Andres Freund <and...@2ndquadrant.com> writes:
> On 2015-01-16 12:15:35 -0500, Tom Lane wrote:
>> It strikes me that this patch leaves some lookups on the table,
>> specifically that it fails to avoid repeated hash_search lookups
>> inside tbm_page_is_lossy() in the situation where we're adding
>> new TIDs to an already-lossified page.  Is it worth adding a few
>> more lines to handle that case as well?

> There was an alternative version (v2.3 in 549950fb.2050...@sigaev.ru) of
> the patch that cached the lossiness of a page, but Teodor/David didn't
> find it to be beneficial in their benchmarking.

> Teodor's argument was basically that it's completely lost in the noise
> due to the much bigger overhead of rechecks.

That's a fair point, but on reflection it seems like a patch that covered
this case too wouldn't actually be any more complicated, so why not?
v2.3 is pretty brute-force and I agree it's not very attractive, but
I was thinking about something like

        BlockNumber cached_blkno = InvalidBlockNumber;
        PagetableEntry *page = NULL;

        inside loop:

                /* look up the target page unless we already have it */
                if (blk != cached_blkno)
                {
                        if (tbm_page_is_lossy(tbm, blk))
                                page = NULL;
                        else
                                page = tbm_get_pageentry(tbm, blk);
                        cached_blkno = blk;
                }
                if (page == NULL)
                        continue;  /* page is already marked lossy */

The "reset" after calling tbm_lossify() would just need to be
"cached_blkno = InvalidBlockNumber".

                        regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers