On Sat, 4 Nov 2023 at 22:08, Andrey M. Borodin <x4...@yandex-team.ru> wrote:

> On 30 Oct 2023, at 09:20, Dilip Kumar <dilipbal...@gmail.com> wrote:
>
>> changed the logic of SlruAdjustNSlots() in 0002, such that now it
>> starts with the next power-of-2 value of the configured slots and
>> keeps doubling the number of banks until we reach the maximum of
>> SLRU_MAX_BANKS (128), as long as the bank size stays bigger than
>> SLRU_MIN_BANK_SIZE (8).  By doing so, we ensure we don't have too
>> many banks
>
> There was nothing wrong with having too many banks until bank-wise locks
> and counters were added in later patchsets.
> Using a hashtable to find an SLRU page in the buffer is, IMV, too slow.
> Some comments on this approach can be found here [0].
> I'm OK with having an HTAB for that if we are sure performance does not
> degrade significantly, but I really doubt this is the case.
> I even think SLRU buffers used an HTAB in some ancient times, but I could
> not find the commit where it was changed to a linear search.
>
> Maybe we could decouple locks and counters from SLRU banks? Banks were
> meant to be small to exploit the performance of a local linear search.
> Lock partitions surely have to be bigger.
>

Is there a particular reason why lock partitions need to be bigger? We have
one lock per buffer anyway; bank-wise locks will increase the number of
locks by less than 10%.

I am working on trying out a SIMD-based LRU mechanism that uses a 16-entry
bank. The data layout is:

struct CacheBank {
    int  page_numbers[16];   /* fills one 64-byte cache line */
    char access_age[16];     /* first 16 bytes of the second line */
};

The first array fills exactly one cache line, and the second line has 48
bytes of space left over that could fit an LWLock and the page_status and
page_dirty arrays.
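
For illustration, one possible packing that fits the whole bank in two
cache lines, assuming sizeof(LWLock) == 16 and page status squeezed down
to one byte per slot (field names are placeholders, not from any patch):

struct CacheBank {
    int    page_numbers[16];  /* cache line 1: 64 bytes */
    char   access_age[16];    /* cache line 2: 16 bytes */
    char   page_status[16];   /* SlruPageStatus packed to one byte each */
    bool   page_dirty[16];    /* 16 bytes */
    LWLock lock;              /* 16 bytes, completing the second line */
};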

Lookup + LRU maintenance takes 20 instructions with a 14-cycle latency, and
the only branch is for found/not found. I hope to have a working prototype
of SLRU on top of this in the next couple of days.
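
Sketched with AVX2/SSE2 intrinsics, the idea could look roughly like this
(an illustrative sketch of the approach, not the actual prototype, so the
exact instruction mix will differ):

#include <immintrin.h>

/*
 * Find `page` in the bank, returning the slot index or -1.  Two AVX2
 * compares cover all 16 page numbers; the combined movemask is the
 * only thing we branch on (found/not found).
 */
static inline int
bank_lookup(const struct CacheBank *bank, int page)
{
    __m256i key = _mm256_set1_epi32(page);
    __m256i lo = _mm256_loadu_si256((const __m256i *) &bank->page_numbers[0]);
    __m256i hi = _mm256_loadu_si256((const __m256i *) &bank->page_numbers[8]);
    __m256i eq_lo = _mm256_cmpeq_epi32(key, lo);
    __m256i eq_hi = _mm256_cmpeq_epi32(key, hi);
    unsigned mask = _mm256_movemask_ps(_mm256_castsi256_ps(eq_lo)) |
        (_mm256_movemask_ps(_mm256_castsi256_ps(eq_hi)) << 8);

    if (mask == 0)
        return -1;
    return __builtin_ctz(mask);
}

/*
 * Branch-free LRU maintenance.  Ages are kept as a permutation of 0..15
 * with 0 = most recently used.  Every entry younger than the touched slot
 * ages by one (_mm_cmplt_epi8 yields -1 where true, so the subtraction
 * adds 1), and the touched slot becomes the youngest.
 */
static inline void
bank_touch(struct CacheBank *bank, int slot)
{
    __m128i ages = _mm_loadu_si128((const __m128i *) bank->access_age);
    __m128i touched = _mm_set1_epi8(bank->access_age[slot]);
    __m128i younger = _mm_cmplt_epi8(ages, touched);

    ages = _mm_sub_epi8(ages, younger);
    _mm_storeu_si128((__m128i *) bank->access_age, ages);
    bank->access_age[slot] = 0;
}

Victim selection stays branch-free too: the slot with age 15 is the oldest,
and a bank_touch on it after the replacement makes it the youngest again.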

Regards,
Ants Aasma
