> > I definitely think it's worth it, even if it doesn't handle an
> > inline-compressed datum.
>
> Yeah.  I'm not certain how much benefit we could get there anyway.
> If the datum isn't out-of-line then there's a small upper limit on how
> big it can be and hence a small upper limit on how long it takes to
> decompress.  It's not clear that a complicated caching scheme would
> pay for itself.

Well, there's a small upper limit per instance, but the aggregate could still be 
significant in a situation like a btree scan that repeatedly detoasts the same 
datum. Note that the "inline compressed" case includes packed varlenas, which 
are being copied just to get their alignment right. It would be nice to get rid 
of that palloc/pfree bandwidth.

I don't really see a way to do this, though. If we hooked into the original 
datum's memory context, we could use the pointer itself as the cache key. But 
that doesn't work if the original datum comes from a buffer.

One thought I had -- which doesn't seem to go anywhere, but seems worth 
mentioning in case you see a way to leverage it that I don't -- is that if the 
toast key is already in the cache, deform_tuple could substitute the cached 
value directly instead of waiting for someone to detoast it. That would save 
all the subsequent trips to the toast cache manager. I'm not sure it would give 
us a convenient way to know when to unpin the toast cache entry, though. And 
it's possible that some code is aware that deform_tuple currently doesn't 
allocate anything, and therefore doesn't set the current memory context to one 
that will live as long as the data it returns.


Incidentally, I'm on vacation and reading this via an awful webmail interface, 
so I'm likely to miss some interesting stuff for a couple of weeks. I suppose 
the S/N ratio of the list is likely to move, but I'm not sure in which 
direction...
