Lol, if they have optimized it by removing the LRU nature, that was perhaps overzealous, or perhaps your workload fits entirely within the RAM cache so that replacement is never an issue. Without the LRU there are approximately as many buckets as objects, so replacement based on bucket would be largely random. Instead we can have a partitioned LRU, and we will probably have to go that route anyway, as it looks like lock contention in the cache is pretty bad. That's the next area I was thinking of looking at...
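For concreteness, here is a minimal sketch of what a partitioned LRU could look like. This is illustrative only, not the ATS ram cache code: the names (PartitionedLRU, Partition, NUM_PARTITIONS) and the use of std::list/std::unordered_map are assumptions for the example. The point is just that a few bits of the key hash select one of N independent LRUs, each with its own lock, so threads touching different partitions never contend, while eviction within each partition stays strictly least-recently-used:

#include <cstdint>
#include <list>
#include <mutex>
#include <unordered_map>

// Illustrative sketch only, not ATS code. The real ram cache tracks bytes;
// this sketch counts entries to keep it short.
struct PartitionedLRU {
  static constexpr int NUM_PARTITIONS = 16; // power of two: pick is one mask

  struct Partition {
    std::mutex lock;          // contention is now per-partition, not cache-wide
    std::list<uint64_t> lru;  // front = most recently used
    std::unordered_map<uint64_t, std::list<uint64_t>::iterator> index;
    size_t max_entries = 1024;  // per-partition budget

    void put(uint64_t key) {
      std::lock_guard<std::mutex> g(lock);
      auto it = index.find(key);
      if (it != index.end())
        lru.erase(it->second);  // drop the stale position on re-insert
      lru.push_front(key);
      index[key] = lru.begin();
      while (lru.size() > max_entries) {  // evict the true LRU entry
        index.erase(lru.back());
        lru.pop_back();
      }
    }

    bool get(uint64_t key) {
      std::lock_guard<std::mutex> g(lock);
      auto it = index.find(key);
      if (it == index.end())
        return false;
      lru.splice(lru.begin(), lru, it->second);  // move to MRU position
      return true;
    }
  };

  Partition partitions[NUM_PARTITIONS];

  // A few bits of the key hash choose the partition, so a given key
  // always lands in the same LRU.
  Partition &pick(uint64_t hash) {
    return partitions[hash & (NUM_PARTITIONS - 1)];
  }
};

Keeping NUM_PARTITIONS a power of two makes the partition pick a single mask, and since the key hash is roughly uniform, the cache-wide budget can simply be split evenly across partitions.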
cheers,
john

On Sat, Dec 29, 2012 at 7:59 PM, Yunkai Zhang <[email protected]> wrote:
> On Sun, Dec 30, 2012 at 1:57 AM, John Plevyak <[email protected]> wrote:
> >
> > This code in ::put() implements the LRU, and as you can see, it uses the
> > LRU data structure (i.e. a simple list from most recently used to least):
> >
> >   while (bytes > max_bytes) {
> >     RamCacheLRUEntry *ee = lru.dequeue();
> >     if (ee)
> >       remove(ee);
> >     else
> >       break;
> >   }
> >
>
> It seems that the code I read had been changed in the section you showed
> above, as my coworker has optimized it a bit. But thanks for your
> explanation all the same.
>
> > On Sat, Dec 29, 2012 at 9:39 AM, Yunkai Zhang <[email protected]> wrote:
> > >
> > > Hi folks:
> > >
> > > I'm reading the code for RamCacheLRU, but I was confused by the
> > > RamCacheLRU->lru queue, defined as follows:
> > >
> > >   struct RamCacheLRU: public RamCache {
> > >     ...
> > >     Que(RamCacheLRUEntry, lru_link) lru;
> > >     DList(RamCacheLRUEntry, hash_link) *bucket;
> > >     ...
> > >   };
> > >
> > > By reading the put/get/remove functions of the RamCacheLRU class, it
> > > seems that the LRU algorithm is implemented by accessing the *bucket*
> > > list instead of the *lru* queue.
> > >
> > > Do I understand it correctly? If so, we can remove the lru queue and
> > > related code to speed up the put/get functions of the LRU a bit.
> > >
> > > --
> > > Yunkai Zhang
> > > Work at Taobao
>
> --
> Yunkai Zhang
> Work at Taobao
