Paul Vixie via Unbound-users wrote:
> you're using LRU replacement, and these records are never accessed.
> therefore while they can push other more vital things out of the cache,
> decreasing cache hit rate, they should be primary targets for replacement
> whenever other data is looking for a place to land.
Cache-busting "trash" records are accessed once (at the time of the cache miss/fill), not never, which puts them at the most recently used end of the LRU list when they enter the cache. In standard LRU they have to traverse the entire list before they become candidates for eviction. The hottest records in the cache (www.google.com, m.facebook.com, etc.) are not at risk of being pushed out, because they will be accessed again very soon, but a significant number of cache-busting lookups can cause records in the lukewarm middle of the cache to be evicted.

There is probably a tendency to oversize the cache when provisioning a production caching DNS server, because memory has been cheap relative to the size of DNS records for a long time, and the more memory you throw at a caching DNS server, the higher the cache hit ratio goes. That means the LRU list in a server with a full cache is very long, and it takes a while for new records entering the cache to push cache-busting records to the end of the list.

I think it would be interesting if a caching DNS server implemented a small modification to LRU called "segmented LRU" [0], which splits the LRU cache into "protected" and "probationary" segments. New entries start in the probationary segment and are only promoted to the protected segment on a second access, so by restricting the size of the probationary segment, the protected segment is preserved for data that has been accessed multiple times. I suspect this would result in higher cache hit ratios and smaller cache sizes under real-world DNS use.

[0] https://en.wikipedia.org/wiki/Cache_replacement_policies#Segmented_LRU_(SLRU)

-- 
Robert Edmonds
[email protected]
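P.S. The segmented LRU idea could be sketched roughly like this. This is a hypothetical Python illustration of the general SLRU policy, not Unbound's actual cache code; the class name, segment sizes, and method names are all invented for the example:

```python
from collections import OrderedDict

class SLRUCache:
    """Sketch of segmented LRU. New entries land in a size-limited
    probationary segment; only entries hit a second time are promoted
    to the protected segment, so one-shot cache-busting fills can only
    evict other probationary entries."""

    def __init__(self, protected_size, probationary_size):
        self.protected_size = protected_size
        self.probationary_size = probationary_size
        self.protected = OrderedDict()      # LRU at the front, MRU at the end
        self.probationary = OrderedDict()

    def get(self, key):
        if key in self.protected:
            self.protected.move_to_end(key)          # refresh recency
            return self.protected[key]
        if key in self.probationary:
            value = self.probationary.pop(key)       # second hit: promote
            if len(self.protected) >= self.protected_size:
                # Demote the protected segment's LRU entry to probationary
                old_key, old_value = self.protected.popitem(last=False)
                self._insert_probationary(old_key, old_value)
            self.protected[key] = value
            return value
        return None                                  # cache miss

    def put(self, key, value):
        if key in self.protected:
            self.protected[key] = value
            self.protected.move_to_end(key)
        else:
            self.probationary.pop(key, None)
            self._insert_probationary(key, value)    # all fills start here

    def _insert_probationary(self, key, value):
        while len(self.probationary) >= self.probationary_size:
            self.probationary.popitem(last=False)    # evict probationary LRU
        self.probationary[key] = value
```

With this policy, a flood of never-repeated lookups churns only the probationary segment; a record like www.google.com that has been accessed twice sits in the protected segment and survives the flood.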
