GitHub user ben-manes commented on the issue:

    https://github.com/apache/storm/pull/1783
  
    I was a co-author of Guava's cache, too. 
    
    Guava originally treated soft references as an ideal caching scheme, since they offer great concurrency and delegate memory management to the GC. The code evolved from `ReferenceMap` to `MapMaker` to optimize space, especially for computations (no need for a `Future` wrapper). Unfortunately, soft references perform terribly outside of a micro-benchmark because they trigger full GCs. In parallel, I had been experimenting with approaches for a concurrent LRU cache ([CLHM](https://github.com/ben-manes/concurrentlinkedhashmap)), and after joining Google I helped retrofit its ideas onto Guava. A lot of good came out of that, but I left before working on performance optimization.
    
    Java 8 provided an excuse to start from scratch. Caffeine is much faster and packs in even more features. I also spent time exploring eviction policies, which led to co-authoring a paper on a new technique called TinyLFU. It achieves a near-optimal hit rate with a low memory footprint and amortized O(1) overhead by tracking access frequency in a popularity sketch. Caffeine uses the same concurrency model as CLHM and Guava (inspired by a write-ahead log), which allows for concurrent O(1) reads and writes.
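    To make the popularity sketch concrete, here is a minimal, hypothetical illustration of the idea (this is not Caffeine's actual `FrequencySketch`, which packs 4-bit counters into longs and adds further optimizations): a count-min-style sketch with saturating counters and periodic halving, so that stale popularity decays over time.

```java
// Minimal sketch of a TinyLFU-style popularity sketch (illustrative only):
// a count-min sketch with counters that saturate at 15 (mimicking 4-bit
// counters) and an aging step that halves all counters after a sample
// period, so recent accesses dominate the frequency estimate.
class FrequencySketch {
    private static final int DEPTH = 4; // independent hash rows
    private static final int MAX_COUNT = 15; // saturate like a 4-bit counter
    private static final int[] SEEDS = {0x9E3779B1, 0x85EBCA6B, 0xC2B2AE35, 0x27D4EB2F};

    private final int[][] table; // DEPTH rows of counters
    private final int width;     // counters per row (power of two)
    private final int sampleSize; // events before aging kicks in
    private int events;

    FrequencySketch(int width) {
        this.width = Integer.highestOneBit(Math.max(width, 16));
        this.table = new int[DEPTH][this.width];
        this.sampleSize = 10 * this.width;
    }

    // Mixes the key's hash with a per-row odd seed and masks to the row width.
    private int index(Object key, int row) {
        int h = key.hashCode() * SEEDS[row];
        h ^= h >>> 16;
        return h & (width - 1);
    }

    /** Records one access to {@code key}. */
    void increment(Object key) {
        for (int row = 0; row < DEPTH; row++) {
            int i = index(key, row);
            if (table[row][i] < MAX_COUNT) {
                table[row][i]++;
            }
        }
        if (++events == sampleSize) {
            age();
        }
    }

    /** Estimates the access frequency of {@code key} (the minimum over all rows). */
    int frequency(Object key) {
        int min = MAX_COUNT;
        for (int row = 0; row < DEPTH; row++) {
            min = Math.min(min, table[row][index(key, row)]);
        }
        return min;
    }

    /** Halves every counter so the sketch reflects recent popularity. */
    private void age() {
        for (int[] row : table) {
            for (int i = 0; i < row.length; i++) {
                row[i] >>>= 1;
            }
        }
        events = 0;
    }
}
```

    An admission policy can then compare `frequency(candidate)` against `frequency(victim)` and only admit the candidate into the cache when it is the more popular of the two.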
    
    The [HighScalability article](http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html) provides an overview of the algorithms that I use.

