[
https://issues.apache.org/jira/browse/KUDU-613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17926913#comment-17926913
]
ASF subversion and git services commented on KUDU-613:
------------------------------------------------------
Commit 1d46b2fcdba6b30c52ebbba8725a16d749e4f857 in kudu's branch
refs/heads/master from Mahesh Reddy
[ https://gitbox.apache.org/repos/asf?p=kudu.git;h=1d46b2fcd ]
KUDU-613: Fix BlockCache Constructor
The capacity constraints are not calculated properly
when creating a block cache with the SLRU eviction
policy. This patch fixes the miscalculation.
Change-Id: Icfde56fd766ba7160052e88ca09a63845f3297c6
Reviewed-on: http://gerrit.cloudera.org:8080/22478
Reviewed-by: Alexey Serbin <[email protected]>
Tested-by: Alexey Serbin <[email protected]>
> Scan-resistant cache replacement algorithm for the block cache
> --------------------------------------------------------------
>
> Key: KUDU-613
> URL: https://issues.apache.org/jira/browse/KUDU-613
> Project: Kudu
> Issue Type: Improvement
> Components: perf
> Affects Versions: M4.5
> Reporter: Andrew Wang
> Assignee: Mahesh Reddy
> Priority: Major
> Labels: performance, roadmap-candidate
>
> The block cache currently uses LRU, which is vulnerable to large scan
> workloads. It'd be good to implement something like 2Q.
> ARC (patent encumbered, but good for ideas):
> https://www.usenix.org/conference/fast-03/arc-self-tuning-low-overhead-replacement-cache
> HBase (2Q like):
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
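> The 2Q/SLRU family of policies resists scans by splitting the cache into a
> probationary segment (where new entries land) and a protected segment (which
> an entry reaches only on a second access), so a one-pass scan can churn the
> probationary segment without evicting hot blocks. A minimal Python sketch of
> that idea follows; the class name, segment capacities, and entry-count (rather
> than byte-size) accounting are illustrative assumptions, not Kudu's actual
> C++ block cache implementation:

```python
from collections import OrderedDict

class SLRUCache:
    """Sketch of a segmented LRU (SLRU) cache. New entries enter the
    probationary segment; an entry is promoted to the protected segment
    only when it is accessed again, so a single large scan cannot evict
    frequently used entries."""

    def __init__(self, probationary_capacity, protected_capacity):
        self.prob_cap = probationary_capacity
        self.prot_cap = protected_capacity
        self.probationary = OrderedDict()  # LRU order: oldest first
        self.protected = OrderedDict()

    def get(self, key):
        if key in self.protected:
            self.protected.move_to_end(key)  # refresh recency
            return self.protected[key]
        if key in self.probationary:
            # Second access: promote into the protected segment.
            value = self.probationary.pop(key)
            self.protected[key] = value
            if len(self.protected) > self.prot_cap:
                # Demote the protected segment's LRU victim back to
                # probationary instead of dropping it outright.
                old_key, old_val = self.protected.popitem(last=False)
                self._insert_probationary(old_key, old_val)
            return value
        return None

    def put(self, key, value):
        if key in self.protected:
            self.protected[key] = value
            self.protected.move_to_end(key)
        else:
            self.probationary.pop(key, None)
            self._insert_probationary(key, value)

    def _insert_probationary(self, key, value):
        self.probationary[key] = value
        if len(self.probationary) > self.prob_cap:
            # Scan traffic is evicted here without ever touching
            # the protected segment.
            self.probationary.popitem(last=False)
```

> A real block cache would account capacity in bytes, not entry counts —
> which is exactly the kind of bookkeeping the commit above corrects for
> the SLRU constructor.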
--
This message was sent by Atlassian Jira
(v8.20.10#820010)