[ https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15559308#comment-15559308 ]

Ben Manes commented on HBASE-15560:
-----------------------------------

1. Performed {{git revert b952e64}}.
2. Configured YCSB workload B with the following settings:
{code}
recordcount=100000
operationcount=1000000
{code}
3. Started the HBase server with the following {{hbase-site.xml}} configuration 
(the {{hfile.block.cache.policy}} property selects the implementation; see the 
sketch after step 4):
{code:xml}
<property>
 <name>hfile.block.cache.size</name>
 <value>0.1f</value>
</property>
<property>
 <name>hbase.regionserver.global.memstore.size</name>
 <value>0.7f</value>
</property>
<property>
 <name>hfile.block.cache.policy</name>
 <value>Lru</value>
</property>
{code}
4. [Loaded and ran 
YCSB|https://github.com/brianfrankcooper/YCSB/tree/master/hbase098] with both 
the Lru and TinyLfu block caches.
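
The {{hfile.block.cache.policy}} value in step 3 is what switches between the 
two implementations. Purely as an illustration (the helper name and the 
constructor arguments below are assumptions, not the patch's actual wiring), 
the selection can be pictured as:
{code:java}
import java.util.concurrent.ForkJoinPool;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.io.hfile.BlockCache;
import org.apache.hadoop.hbase.io.hfile.LruBlockCache;
import org.apache.hadoop.hbase.io.hfile.TinyLfuBlockCache;

// Hypothetical sketch only: createL1Cache and the constructor signatures are
// assumptions made for illustration, not the factory code in the patch.
public final class BlockCachePolicySketch {
  static BlockCache createL1Cache(Configuration conf, long maxBytes, long blockSize) {
    String policy = conf.get("hfile.block.cache.policy", "Lru");
    if ("TinyLfu".equalsIgnoreCase(policy)) {
      // assumed signature for the cache added by this patch
      return new TinyLfuBlockCache(maxBytes, blockSize, ForkJoinPool.commonPool(), conf);
    }
    return new LruBlockCache(maxBytes, blockSize);
  }
}
{code}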

h4. LruBlockCache

{code}
totalSize=96.67 MB, freeSize=2.32 MB, max=98.99 MB, blockCount=1793, 
accesses=4766387, hits=4081322, hitRatio=85.63%, 
cachingAccesses=4764133, cachingHits=4081322, cachingHitsRatio=85.67%, 
evictions=10402, evicted=681017, evictedPerRun=65.46981349740435
{code}

{code}
[OVERALL], RunTime(ms), 189753.0
[OVERALL], Throughput(ops/sec), 5270.008906315052
[TOTAL_GCS_PS_Scavenge], Count, 717.0
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 730.0
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.38471065016099876
[TOTAL_GCS_PS_MarkSweep], Count, 0.0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0.0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 717.0
[TOTAL_GC_TIME], Time(ms), 730.0
[TOTAL_GC_TIME_%], Time(%), 0.38471065016099876
[READ], Operations, 950125.0
[READ], AverageLatency(us), 152.8599626364952
[READ], MinLatency(us), 76.0
[READ], MaxLatency(us), 60959.0
[READ], 95thPercentileLatency(us), 215.0
[READ], 99thPercentileLatency(us), 253.0
[READ], Return=OK, 950125
[CLEANUP], Operations, 2.0
[CLEANUP], AverageLatency(us), 72164.0
[CLEANUP], MinLatency(us), 8.0
[CLEANUP], MaxLatency(us), 144383.0
[CLEANUP], 95thPercentileLatency(us), 144383.0
[CLEANUP], 99thPercentileLatency(us), 144383.0
[UPDATE], Operations, 49875.0
[UPDATE], AverageLatency(us), 215.8185664160401
[UPDATE], MinLatency(us), 125.0
[UPDATE], MaxLatency(us), 36159.0
[UPDATE], 95thPercentileLatency(us), 294.0
[UPDATE], 99thPercentileLatency(us), 484.0
[UPDATE], Return=OK, 49875
{code}

h4. TinyLfuBlockCache

{code}
totalSize=98.98 MB, freeSize=4.07 KB, max=98.99 MB, blockCount=2112,
accesses=4170109, hits=3794003, hitRatio=90.98%, 
cachingAccesses=4170112, cachingHits=3794005, cachingHitsRatio=90.98%, 
evictions=373994, evicted=37399
{code}

{code}
[OVERALL], RunTime(ms), 118390.0
[OVERALL], Throughput(ops/sec), 8446.659346228567
[TOTAL_GCS_PS_Scavenge], Count, 664.0
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 714.0
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.6030914773207197
[TOTAL_GCS_PS_MarkSweep], Count, 0.0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0.0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 664.0
[TOTAL_GC_TIME], Time(ms), 714.0
[TOTAL_GC_TIME_%], Time(%), 0.6030914773207197
[READ], Operations, 949956.0
[READ], AverageLatency(us), 112.233432916893
[READ], MinLatency(us), 75.0
[READ], MaxLatency(us), 61151.0
[READ], 95thPercentileLatency(us), 165.0
[READ], 99thPercentileLatency(us), 204.0
[READ], Return=OK, 949956
[CLEANUP], Operations, 2.0
[CLEANUP], AverageLatency(us), 59732.0
[CLEANUP], MinLatency(us), 8.0
[CLEANUP], MaxLatency(us), 119487.0
[CLEANUP], 95thPercentileLatency(us), 119487.0
[CLEANUP], 99thPercentileLatency(us), 119487.0
[UPDATE], Operations, 50044.0
[UPDATE], AverageLatency(us), 188.9981216529454
[UPDATE], MinLatency(us), 122.0
[UPDATE], MaxLatency(us), 36671.0
[UPDATE], 95thPercentileLatency(us), 257.0
[UPDATE], 99thPercentileLatency(us), 489.0
[UPDATE], Return=OK, 50044
{code}

> TinyLFU-based BlockCache
> ------------------------
>
>                 Key: HBASE-15560
>                 URL: https://issues.apache.org/jira/browse/HBASE-15560
>             Project: HBase
>          Issue Type: Improvement
>          Components: BlockCache
>    Affects Versions: 2.0.0
>            Reporter: Ben Manes
>            Assignee: Ben Manes
>         Attachments: HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> tinylfu.patch
>
>
> LruBlockCache uses the Segmented LRU (SLRU) policy to capture the frequency 
> and recency of the working set. It achieves concurrency by using an O(n) 
> background thread to prioritize the entries and evict. Accessing an entry is 
> O(1): a hash table lookup that records the entry's logical access time and 
> sets a frequency flag. A write is performed in O(1) time by updating the hash 
> table and triggering the async eviction thread. This provides ideal 
> concurrency and minimizes latencies by penalizing the background thread 
> instead of the caller. However, the policy does not age the frequencies and 
> may not be resilient to various workload patterns.
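> As a rough illustration of that split (generic code, not HBase's actual 
> LruBlockCache source), the access path only touches per-entry metadata while 
> a background task does the O(n) work:
> {code:java}
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.Executors;
> import java.util.concurrent.ScheduledExecutorService;
> import java.util.concurrent.TimeUnit;
> import java.util.concurrent.atomic.AtomicLong;
>
> // Generic illustration, not HBase source: O(1) reads/writes, O(n) async eviction.
> final class SlruStyleCache<K, V> {
>   static final class Entry<V> {
>     final V value;
>     volatile long accessTime;   // logical access time, updated on every hit
>     volatile boolean frequent;  // flag used to place the entry in a hotter SLRU segment
>     Entry(V value) { this.value = value; }
>   }
>
>   private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<>();
>   private final AtomicLong clock = new AtomicLong();
>   private final ScheduledExecutorService evictor =
>       Executors.newSingleThreadScheduledExecutor();
>
>   SlruStyleCache() {
>     // The background thread periodically prioritizes entries and evicts the coldest.
>     evictor.scheduleWithFixedDelay(this::evict, 10, 10, TimeUnit.SECONDS);
>   }
>
>   V get(K key) {
>     Entry<V> e = map.get(key);        // O(1) hash table lookup
>     if (e == null) return null;
>     e.accessTime = clock.incrementAndGet();
>     e.frequent = true;
>     return e.value;
>   }
>
>   void put(K key, V value) {
>     map.put(key, new Entry<>(value)); // O(1); eviction happens asynchronously
>   }
>
>   private void evict() {
>     // O(n): order entries by (frequent, accessTime) and remove the lowest-priority
>     // ones until under capacity. Omitted here; the point is the caller never pays it.
>   }
> }
> {code}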
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the 
> frequency in a counting sketch, ages it periodically by halving the counters, 
> and orders entries by SLRU. An entry is discarded by comparing the frequency 
> of the new arrival (the candidate) to that of the SLRU's victim, and keeping 
> the one with the higher frequency. This allows the operations to be performed 
> in O(1) time and, through the use of a compact sketch, a much larger history 
> to be retained beyond the current working set. In a variety of real-world 
> traces the policy had [near-optimal hit 
> rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
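> A minimal sketch of that admission idea (deliberately simplified to a single 
> counter array; the real implementation uses a CountMin-style sketch with 
> 4-bit counters and several hash functions):
> {code:java}
> // Simplified illustration of TinyLFU admission and aging, not Caffeine's internals.
> final class TinyLfuAdmittor {
>   private final int[] counters = new int[1 << 16]; // one counter per hash slot
>   private int additions;
>
>   void recordAccess(Object key) {
>     int i = index(key);
>     if (counters[i] < 15) counters[i]++;            // saturate at 15 (4-bit counters)
>     if (++additions == 10 * counters.length) reset();
>   }
>
>   // Keep whichever of the candidate and the SLRU victim has the higher estimate.
>   boolean admit(Object candidate, Object victim) {
>     return frequency(candidate) > frequency(victim);
>   }
>
>   private int frequency(Object key) { return counters[index(key)]; }
>
>   private void reset() {                            // periodic aging: halve every counter
>     for (int i = 0; i < counters.length; i++) counters[i] >>>= 1;
>     additions = 0;
>   }
>
>   private int index(Object key) { return key.hashCode() & (counters.length - 1); }
> }
> {code}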
> Concurrency is achieved by buffering and replaying the operations, similar to 
> a write-ahead log. A read is recorded into a striped ring buffer and a write 
> into a queue. The operations are applied in batches under a try-lock by an 
> asynchronous thread, thereby tracking the usage pattern without incurring 
> high latencies 
> ([benchmarks|https://github.com/ben-manes/caffeine/wiki/Benchmarks#server-class]).
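> A sketch of that buffer-and-replay pattern (illustrative only; the names 
> below are not Caffeine's actual classes, and its read buffers are striped per 
> thread):
> {code:java}
> import java.util.Queue;
> import java.util.concurrent.ArrayBlockingQueue;
> import java.util.concurrent.ConcurrentLinkedQueue;
> import java.util.concurrent.Executor;
> import java.util.concurrent.ForkJoinPool;
> import java.util.concurrent.locks.ReentrantLock;
>
> // Illustration of the pattern described above, not Caffeine's actual classes.
> final class BufferedPolicy<K> {
>   private final Queue<K> readBuffer = new ArrayBlockingQueue<>(128); // lossy is acceptable
>   private final Queue<Runnable> writeBuffer = new ConcurrentLinkedQueue<>();
>   private final ReentrantLock evictionLock = new ReentrantLock();
>   private final Executor executor = ForkJoinPool.commonPool();
>
>   void recordRead(K key) {
>     readBuffer.offer(key);            // dropped when full; a sample is enough for reads
>     executor.execute(this::drainBuffers);
>   }
>
>   void recordWrite(Runnable policyUpdate) {
>     writeBuffer.add(policyUpdate);    // writes must not be lost
>     executor.execute(this::drainBuffers);
>   }
>
>   private void drainBuffers() {
>     if (!evictionLock.tryLock()) {
>       return;                         // someone else is draining; callers never block
>     }
>     try {
>       K key;
>       while ((key = readBuffer.poll()) != null) { /* replay the read against the policy */ }
>       Runnable task;
>       while ((task = writeBuffer.poll()) != null) { task.run(); }
>     } finally {
>       evictionLock.unlock();
>     }
>   }
> }
> {code}
> A real implementation would only schedule the drain once the buffers pass a 
> threshold, rather than on every access.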
> In YCSB benchmarks the results were inconclusive. For a large cache (99% hit 
> rates) the two caches have near-identical throughput and latencies, with 
> LruBlockCache narrowly winning. At medium and small cache sizes, TinyLFU had 
> a 1-4% hit rate improvement and therefore lower latencies. The lackluster 
> result is because a synthetic Zipfian distribution is used, on which SLRU 
> performs optimally. In a more varied, real-world workload we'd expect to see 
> improvements from being able to make smarter predictions.
> The provided patch implements BlockCache using the 
> [Caffeine|https://github.com/ben-manes/caffeine] caching library (see 
> HighScalability 
> [article|http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html]).
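> For context, a minimal sketch of a byte-weighted Caffeine cache follows; the 
> key/value types and the policy wiring in the actual patch differ, and this 
> only shows the public builder API:
> {code:java}
> import com.github.benmanes.caffeine.cache.Cache;
> import com.github.benmanes.caffeine.cache.Caffeine;
> import com.github.benmanes.caffeine.cache.RemovalCause;
>
> public class CaffeineBlockCacheSketch {
>   public static void main(String[] args) {
>     // Cap the cache by total bytes rather than by entry count, as a block cache must.
>     Cache<String, byte[]> cache = Caffeine.newBuilder()
>         .maximumWeight(99L * 1024 * 1024)          // ~99 MB, matching the runs above
>         .weigher((String name, byte[] block) -> block.length)
>         .removalListener((String name, byte[] block, RemovalCause cause) ->
>             System.out.println("evicted " + name + ": " + cause))
>         .recordStats()                             // hit/miss counters for cache metrics
>         .build();
>
>     cache.put("block-1", new byte[64 * 1024]);     // insert a block
>     byte[] block = cache.getIfPresent("block-1");  // a hit feeds the W-TinyLFU policy
>     System.out.println(block.length + " cached bytes; " + cache.stats());
>   }
> }
> {code}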
> Edward Bortnikov and Eshcar Hillel have graciously provided guidance for 
> evaluating this patch ([github 
> branch|https://github.com/ben-manes/hbase/tree/tinylfu]).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
