[ https://issues.apache.org/jira/browse/HADOOP-1398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12560991#action_12560991 ]

tomwhite edited comment on HADOOP-1398 at 1/21/08 3:50 AM:
------------------------------------------------------------

bq. In the below from HStoreFile, blockCacheEnabled method argument is not 
being passed to the MapFile constructors.

Thanks - this had the effect of never enabling the cache! I've fixed this.
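The bug pattern is worth spelling out: a boolean accepted by a wrapper but never forwarded to the constructor it wraps, so callers can never turn the feature on. The sketch below uses hypothetical stand-in names, not the actual HStoreFile/MapFile signatures.

```java
// Hypothetical sketch of the dropped-argument bug; the class and method
// names are illustrative stand-ins, not HBase's real API.
public class BlockCacheFlag {

    // Stand-in for a MapFile-style reader that can cache blocks.
    static class Reader {
        final boolean blockCacheEnabled;
        Reader(boolean blockCacheEnabled) {
            this.blockCacheEnabled = blockCacheEnabled;
        }
    }

    // Buggy version: the argument is silently dropped, so the cache
    // is never enabled no matter what the caller asked for.
    static Reader openBuggy(boolean blockCacheEnabled) {
        return new Reader(false);          // flag lost here
    }

    // Fixed version: the argument is passed through.
    static Reader openFixed(boolean blockCacheEnabled) {
        return new Reader(blockCacheEnabled);
    }
}
```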

bq. Out of interest, did you regenerate the thrift or hand-edit it? Changes 
look right - just wondering.

I regenerated using the latest thrift trunk.

bq. Default ReferenceMap constructor makes for hard keys and soft values. If 
value has been let go by the GC, does the corresponding key just stay in the 
Map?

No, both the key and the value are removed from the map - I checked the source.
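For anyone unfamiliar with the semantics: the default commons-collections ReferenceMap holds keys with hard references and values with soft references, and purges the whole entry once the GC clears a value. A minimal JDK-only sketch of that behaviour, using java.lang.ref.SoftReference directly (not the ReferenceMap implementation itself):

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// JDK-only sketch of hard-key/soft-value map semantics: values are held
// via SoftReference, and an entry whose value the GC has cleared is
// dropped (key and all) the next time it is touched.
public class SoftValueMap<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<>();

    public void put(K key, V value) {
        map.put(key, new SoftReference<>(value));
    }

    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        if (ref == null) {
            return null;
        }
        V value = ref.get();
        if (value == null) {
            map.remove(key);   // value was collected: remove the key too
        }
        return value;
    }

    public int size() {
        return map.size();
    }
}
```

The real ReferenceMap additionally purges stale entries eagerly via a ReferenceQueue rather than waiting for the entry to be touched.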

This patch also includes changes to HBase Shell so you can alter a table to 
enable block caching.

> Add in-memory caching of data
> -----------------------------
>
>                 Key: HADOOP-1398
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1398
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: contrib/hbase
>            Reporter: Jim Kellerman
>            Priority: Trivial
>         Attachments: commons-collections-3.2.jar, hadoop-blockcache-v2.patch, 
> hadoop-blockcache-v3.patch, hadoop-blockcache-v4.patch, 
> hadoop-blockcache.patch
>
>
> Bigtable provides two in-memory caches: one for row/column data and one for 
> disk blocks.
> The size of each cache should be configurable, data should be loaded lazily, 
> and the cache managed by an LRU mechanism.
> One complication of the block cache is that all data is read through a 
> SequenceFile.Reader, which ultimately reads data off of disk via an RPC proxy 
> for ClientProtocol. This implies that the block caching would have to be 
> pushed down to either the DFSClient or SequenceFile.Reader.
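The LRU mechanism the description asks for can be sketched with the JDK's LinkedHashMap in access-order mode. This is only an illustration of the eviction policy; a real block cache would key on something like (file, block offset) and cap total bytes rather than entry count.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch: LinkedHashMap in access-order mode evicts
// the least recently used entry once the size cap is exceeded.
public class LruBlockCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruBlockCache(int maxEntries) {
        super(16, 0.75f, true);            // true = access order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;        // evict least recently used
    }
}
```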
