On 18 December 2014 at 04:56, Teodor Sigaev <teo...@sigaev.ru> wrote:
>
>> You could well be right, but it would be good to compare the numbers
>> just so we know this for sure.
>
> I wasn't right :(
>
> # set work_mem='64kB';
> # set enable_seqscan  = off;
> Patched: 1194.094 ms
> Master:  1765.338 ms
>
>> Are you seeing the same?
> Fixed too. The mistake was the supposition that the current page becomes
> lossy in tbm_lossify(). That's not always true, because tbm_lossify() may
> lossify only some of the pages, and we don't know which ones. Thank you
> very much.
>
>
Oh, that makes sense. Though I wonder whether you need to clear the caches at
all when calling tbm_lossify(). A page never becomes un-lossified, and, at
least for lossy_page, the cache would never be set to the current page
anyway; it's either going to be InvalidBlockNumber or some earlier page that
was already lossy. I also can't quite see the need to set page to NULL.
Doing that just means we'd have to look the page up again after
tbm_lossify() is called, if the next loop iteration happens to hit the same
page. As far as I can tell, that would only be needed if the hash lookup
were going to return a new instance of the page after we've lossified it,
which doesn't appear to happen.
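
To make that concrete, here's a rough, standalone sketch of the caching
pattern being discussed. The names below (Bitmap, PageEntry, add_tuple, the
*_slow stubs) are hypothetical stand-ins rather than tidbitmap.c's real
definitions, and the stub bodies are placeholders just so it compiles; it's
only meant to show where the two caches sit and where the clear-on-lossify
question arises:

#include <stdbool.h>
#include <stddef.h>

typedef unsigned int BlockNumber;
#define InvalidBlockNumber ((BlockNumber) 0xFFFFFFFF)

typedef struct PageEntry
{
    BlockNumber blockno;
    /* the real structure also holds a per-tuple bitmap for this page */
} PageEntry;

typedef struct Bitmap
{
    /* caches added to avoid repeated hash lookups for the same heap page */
    BlockNumber lossy_page;     /* last block found to be stored lossily */
    PageEntry  *page;           /* last exact page entry looked up */
    BlockNumber page_blockno;   /* block number that 'page' refers to */
    /* the real structure keeps a hash table of page entries, lossy chunks,
     * memory accounting, etc. */
} Bitmap;

/* Trivial stand-ins for the real hash-table operations. */
static bool
page_is_lossy_slow(Bitmap *tbm, BlockNumber blockno)
{
    (void) tbm; (void) blockno;
    return false;
}

static PageEntry *
get_page_entry_slow(Bitmap *tbm, BlockNumber blockno)
{
    static PageEntry entry;

    (void) tbm;
    entry.blockno = blockno;
    return &entry;
}

static bool
bitmap_over_budget(Bitmap *tbm)
{
    (void) tbm;
    return false;
}

static void
lossify(Bitmap *tbm)
{
    (void) tbm;
}

void
add_tuple(Bitmap *tbm, BlockNumber blockno, int offset)
{
    /* Fast path: we already know this block is stored lossily. */
    if (blockno == tbm->lossy_page)
        return;

    if (page_is_lossy_slow(tbm, blockno))
    {
        tbm->lossy_page = blockno;
        return;
    }

    /* Reuse the cached exact-page entry when the same block repeats. */
    if (tbm->page == NULL || tbm->page_blockno != blockno)
    {
        tbm->page = get_page_entry_slow(tbm, blockno);
        tbm->page_blockno = blockno;
    }

    /* ... set the bit for 'offset' in tbm->page here ... */
    (void) offset;

    if (bitmap_over_budget(tbm))
    {
        lossify(tbm);

        /*
         * Whether the caches must be reset here is exactly the question
         * raised above.  Clearing them is the conservative option; the
         * argument in this mail is that it may be unnecessary.
         */
        tbm->page = NULL;
        tbm->page_blockno = InvalidBlockNumber;
        tbm->lossy_page = InvalidBlockNumber;
    }
}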

I've made these small changes and tweaked the comments a little (patch
attached).

I've also attached some benchmark results using your original table from
upthread. It seems that caching whether the page was seen as lossy does not
help much in this test case. Did you find another case where you saw better
gains?

Regards

David Rowley

Attachment: tbm_cache_benchmark.xlsx
Description: MS-Excel 2007 spreadsheet

Attachment: tbm_cachepage-2.2_dr.patch
Description: Binary data
