[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15927540#comment-15927540
 ] 

ramkrishna.s.vasudevan commented on HBASE-16438:
------------------------------------------------

bq.ByteBufferChunkCell should be in hbase-server beside the MSLAB rather than 
out in hbase-common? It is a server-side only thing? Ditto with copyToChunkCell 
and ChunkCell? What you think?
Yes, I agree.
bq.private long id = -1;
bq.It is passed on Construction. Can it change during lifetime of the Cell?
We can make it final; it won't change. It is only an id, not an offset, just a 
unique number.
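A minimal sketch of what that looks like, with the chunk id as a final field set once at construction (class and field names here are illustrative, not the actual classes from the patch):

```java
// Hypothetical sketch: a Cell backed by a chunk carries the chunk's id as a
// final field, assigned at construction and never changed. The id is just a
// unique number identifying the chunk, not an offset into it.
class ChunkBackedCell {
    private final long chunkId;  // immutable unique id of the backing chunk
    private final int offset;    // position of the cell data inside the chunk
    private final int length;    // length of the cell data

    ChunkBackedCell(long chunkId, int offset, int length) {
        this.chunkId = chunkId;
        this.offset = offset;
        this.length = length;
    }

    long getChunkId() { return chunkId; }
    int getOffset()   { return offset; }
    int getLength()   { return length; }
}
```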
bq.Will it ever be the case that Cells from Chunks are persisted across 
restarts? Say, in a bucketcache that is persisted? Just wondering if the 
chunkid needs to be unique across restarts?
I don't think so. We only need a chunk to stay alive until the segment 
containing it is flushed. So even if the chunk comes from the pool, we need to 
know its chunkId only until the flush happens.
bq.We have MSLABChunkCreator. So a Chunk is a 'piece' of a MSLAB? And a 
'MSLABChunkCreateor' creates chunks or allocates pieces of the MSLAB? Do we 
have to have MSLAB in the name? Is it MSLAB only?
I like the last question. An MSLAB is created per segment, and each MSLAB can 
have more than one Chunk, so it is not one-to-one here.
So now we try to designate a ChunkCreator that does the creation and management 
of these chunks.
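So the shape is: one ChunkCreator for the whole server, handing out chunks to many MSLABs and able to resolve a chunk from its id. A rough sketch of that idea (names and structure are my assumptions, not the committed HBase code):

```java
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of "one ChunkCreator, many chunks per MSLAB":
// the creator is a singleton that mints unique chunk ids and keeps a map
// from id back to chunk, so a Cell holding only an id can find its chunk.
class Chunk {
    final long id;
    final ByteBuffer data;
    Chunk(long id, int size) {
        this.id = id;
        this.data = ByteBuffer.allocate(size);
    }
}

class ChunkCreator {
    private static final ChunkCreator INSTANCE = new ChunkCreator();
    private final AtomicLong nextId = new AtomicLong();
    private final Map<Long, Chunk> chunkIdMap = new ConcurrentHashMap<>();

    static ChunkCreator getInstance() { return INSTANCE; }

    // each MSLAB asks the creator for chunks; a segment's MSLAB may hold many
    Chunk createChunk(int size) {
        Chunk c = new Chunk(nextId.getAndIncrement(), size);
        chunkIdMap.put(c.id, c);
        return c;
    }

    Chunk getChunk(long id) { return chunkIdMap.get(id); }
}
```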
bq.We keep a chunkIdMap? Is this of all chunks? How many chunks will there be? 
All threads will be banging on this Map?
With a ChunkPool we are limited to the pool size, but if there is no pool then 
I think more chunks will be created on demand. That is why, when there is no 
pool, we have the logic of removing those chunk ids from the map on close() of 
a segment, so that we are sure we no longer need them. With a pool we cannot do 
that, as we reuse the chunks.
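The unpooled close() path could look roughly like this (a sketch under assumed names, not the actual patch; the real chunkIdMap lives in the creator and maps ids to chunks):

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: when chunks are NOT pooled, the ids a segment used are
// dropped from the global chunkIdMap on segment close so the map does not grow
// without bound. Pooled chunks stay mapped because they are reused.
class ChunkIndex {
    private final Map<Long, byte[]> chunkIdMap = new ConcurrentHashMap<>();
    private final boolean pooled;

    ChunkIndex(boolean pooled) { this.pooled = pooled; }

    void register(long chunkId, byte[] chunk) {
        chunkIdMap.put(chunkId, chunk);
    }

    // called from the segment's close(): only unmap ids when there is no pool,
    // because pooled chunks must stay resolvable by id for reuse
    void onSegmentClose(Set<Long> idsUsedBySegment) {
        if (!pooled) {
            for (long id : idsUsedBySegment) {
                chunkIdMap.remove(id);
            }
        }
    }

    int size() { return chunkIdMap.size(); }
}
```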
bq.Regards the below, who sets forceOnHeapOnly? Rather should we pass in the 
Cell and let the allocator figure where to allocate the memory?
From the Cell we cannot tell, because with a ChunkPool, once we run past its 
max size it is the MSLAB that decides where to create the chunk, not the Cell.
bq.So if no chunk pool, we keep chunk ids in a map elsewhere than in 
MSLABChunkCreateor?
I am not sure if you have seen [~anastas]'s concern. In fact I thought one 
MSLABChunkCreator is enough and we can pass the singleton ref to the MSLAB and 
the pool, so that the ChunkCreator decides on chunk creation but the 
responsibility stays with the MSLAB to either ask the pool or the ChunkCreator 
directly. But she feels that is not right and that it is better to refactor 
fully, so that only the ChunkCreator does chunk creation and init (which is the 
costly one), and with that change all the CAS operations and the way it works 
out.

> Create a cell type so that chunk id is embedded in it
> -----------------------------------------------------
>
>                 Key: HBASE-16438
>                 URL: https://issues.apache.org/jira/browse/HBASE-16438
>             Project: HBase
>          Issue Type: Sub-task
>    Affects Versions: 2.0.0
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: ramkrishna.s.vasudevan
>         Attachments: HBASE-16438_1.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell such that the chunk out of which it was 
> created, the id of the chunk be embedded in it so that when doing flattening 
> we can use the chunk id as a meta data. More details will follow once the 
> initial tasks are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
