[ https://issues.apache.org/jira/browse/HBASE-15248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094465#comment-16094465 ]
Anoop Sam John commented on HBASE-15248:
----------------------------------------

So what we save corresponding to one block is this block's data (cells) + the header of this block (33 bytes) + block metadata (13 bytes). Correct, [~Stack]? HBASE-15477 already removed the saving of the next block's header while writing to cache.

> BLOCKSIZE 4k should result in 4096 bytes on disk; i.e. fit inside a BucketCache 'block' of 4k
> ---------------------------------------------------------------------------------------------
>
>                 Key: HBASE-15248
>                 URL: https://issues.apache.org/jira/browse/HBASE-15248
>             Project: HBase
>          Issue Type: Sub-task
>          Components: BucketCache
>            Reporter: stack
>
> Chatting w/ a gentleman named Daniel Pol who is messing w/ bucketcache, he wants blocks to be the size specified in the configuration and no bigger. His hardware setup fetches pages of 4k, so a block that has 4k of payload but then has a header, plus the header of the next block (which helps figure what's next when scanning), ends up being 4203 bytes or something, and this then translates into two seeks per block fetch.
> This issue is about what it would take to stay inside our configured size boundary when writing out blocks.
> If not possible, give back better signal on what to do so you could fit inside a particular constraint.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
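The per-block overhead the comment describes can be sketched with a few lines of arithmetic. This is only an illustration of the sizes quoted above (33-byte block header, 13-byte block metadata, as stated in Anoop's comment); the class and constant names are hypothetical, not HBase API, and actual sizes may differ across HBase versions:

```java
public class BlockCacheSizing {
    // Sizes quoted in the comment above (assumptions; may vary by HBase version):
    static final int HFILE_BLOCK_HEADER_BYTES = 33; // this block's header
    static final int BLOCK_META_BYTES = 13;         // block metadata kept alongside the cached block

    public static void main(String[] args) {
        int payload = 4 * 1024; // configured BLOCKSIZE of 4k (cell data only)
        int cached = payload + HFILE_BLOCK_HEADER_BYTES + BLOCK_META_BYTES;
        System.out.println("Bytes kept per cached block: " + cached);          // 4142
        System.out.println("Fits in a 4k bucket: " + (cached <= 4 * 1024));    // false
    }
}
```

So even with HBASE-15477 dropping the next block's header from the cache, a block whose payload fills the configured 4k still overflows a 4k bucket by the header-plus-metadata overhead, which is the mismatch this issue is about.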