[
https://issues.apache.org/jira/browse/HBASE-288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tom White updated HBASE-288:
----------------------------
Attachment: hadoop-blockcache-v7.patch
A new patch (v7) with just the HBase parts in it. I successfully ran the HBase
unit tests with this patch by using a Hadoop Core 0.16 jar that had been
patched with the MapFile and SequenceFile changes in core trunk.
This can be applied to trunk after the branch is created.
Jim/Stack/Bryan: Sorry about the extra work I caused you by committing too
early!
> Add in-memory caching of data
> -----------------------------
>
> Key: HBASE-288
> URL: https://issues.apache.org/jira/browse/HBASE-288
> Project: Hadoop HBase
> Issue Type: Bug
> Components: regionserver
> Reporter: Jim Kellerman
> Assignee: Jim Kellerman
> Priority: Trivial
> Attachments: commons-collections-3.2.jar, hadoop-blockcache-v2.patch,
> hadoop-blockcache-v3.patch, hadoop-blockcache-v4.1.patch,
> hadoop-blockcache-v4.patch, hadoop-blockcache-v5.patch,
> hadoop-blockcache-v6.patch, hadoop-blockcache-v7.patch,
> hadoop-blockcache.patch
>
>
> Bigtable provides two in-memory caches: one for row/column data and one for
> disk blocks.
> The size of each cache should be configurable, data should be loaded lazily,
> and each cache should be managed by an LRU mechanism.
> One complication of the block cache is that all data is read through a
> SequenceFile.Reader, which ultimately reads data off disk via an RPC proxy
> for ClientProtocol. This implies that the block caching would have to be
> pushed down to either the DFSClient or SequenceFile.Reader.
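The lazily-loaded, LRU-managed cache described above can be sketched in plain Java using `LinkedHashMap` in access order. This is an illustrative sketch only, not the patch's actual implementation; the class name `BlockCache`, the string keys, and the `Supplier` loader are assumptions for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

// Illustrative sketch of an LRU block cache (not the HBASE-288 patch itself).
// Blocks are keyed by an assumed "file@offset" string and capped by block count;
// a real cache would likely bound total bytes instead.
public class BlockCache {
    private final Map<String, byte[]> cache;

    public BlockCache(final int maxBlocks) {
        // accessOrder=true makes LinkedHashMap track recency of access, so
        // removeEldestEntry evicts the least-recently-used block on insert.
        this.cache = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                return size() > maxBlocks;
            }
        };
    }

    // Lazy loading: return the cached block, or fetch it (e.g. via the
    // DFSClient RPC read mentioned above) and cache it on first access.
    public synchronized byte[] getBlock(String key, Supplier<byte[]> loader) {
        byte[] block = cache.get(key);
        if (block == null) {
            block = loader.get();
            cache.put(key, block);
        }
        return block;
    }

    public synchronized boolean contains(String key) {
        return cache.containsKey(key);
    }
}
```

A configurable cache size would map naturally onto a Hadoop configuration property, with one such cache per region server.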
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.