[ https://issues.apache.org/jira/browse/HBASE-15669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15249567#comment-15249567 ]

Anoop Sam John commented on HBASE-15669:
----------------------------------------

bq. The same thing came up when we were working on the main jira (HBASE-13153), 
but we are not sure whether, in the future, an edit could contain a mix of 
mutation and bulk load marker cells. If that happens it would break 
replication, so to avoid that we are handling it this way.
How would that really be possible? I don't think it can happen. Limiting the 
qualifier check to one per edit would be a great help when handling normal 
WALEdits (for normal writes), which may contain many cells; as it stands we do 
many compares per edit just to handle bulk load replication. We even have a 
boolean to enable bulk load replication, right? Even that check is not done 
here. We should not be adding so many unwanted compare ops.
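The optimization argued for above can be sketched as follows. This is a minimal illustration, not HBase's actual API: `ReplicationEdit`, `Cell`, and `BULK_LOAD_QUALIFIER` are hypothetical stand-ins. The idea is to gate on the feature flag first and then inspect only the first cell, since a bulk load marker edit would carry only marker cells, so one compare per edit suffices instead of one per cell.

```java
import java.util.List;

// Sketch only: these types stand in for HBase's WALEdit/Cell classes.
public class BulkLoadCheckSketch {
    static final String BULK_LOAD_QUALIFIER = "HBASE::BULKLOAD";

    record Cell(String qualifier) {}
    record ReplicationEdit(List<Cell> cells) {}

    // Cheap checks first: feature flag, then a single qualifier compare
    // on the first cell rather than a compare per cell.
    static boolean isBulkLoadEdit(ReplicationEdit edit, boolean bulkLoadReplicationEnabled) {
        if (!bulkLoadReplicationEnabled || edit.cells().isEmpty()) {
            return false;
        }
        return BULK_LOAD_QUALIFIER.equals(edit.cells().get(0).qualifier());
    }

    public static void main(String[] args) {
        ReplicationEdit marker = new ReplicationEdit(List.of(new Cell(BULK_LOAD_QUALIFIER)));
        ReplicationEdit normal = new ReplicationEdit(List.of(new Cell("cf:q1"), new Cell("cf:q2")));
        System.out.println(isBulkLoadEdit(marker, true));   // true
        System.out.println(isBulkLoadEdit(normal, true));   // false: no per-cell scan needed
        System.out.println(isBulkLoadEdit(marker, false));  // false: flag short-circuits
    }
}
```

With the flag check short-circuiting first, clusters that never replicate bulk loads pay no compare cost at all.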

> HFile size is not considered correctly in a replication request
> ---------------------------------------------------------------
>
>                 Key: HBASE-15669
>                 URL: https://issues.apache.org/jira/browse/HBASE-15669
>             Project: HBase
>          Issue Type: Bug
>          Components: Replication
>    Affects Versions: 1.3.0
>            Reporter: Ashish Singhi
>            Assignee: Ashish Singhi
>             Fix For: 2.0.0, 1.3.0, 1.4.0
>
>         Attachments: HBASE-15669.patch
>
>
> In a single replication request from source cluster a RS can send either at 
> most {{replication.source.size.capacity}} size of data or 
> {{replication.source.nb.capacity}} entries. 
> The size is calculated by considering the cell sizes in each entry, which 
> is computed wrongly in the case of bulk loaded data replication; in that 
> case we need to consider the size of the HFiles, not the cells.
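The batching rule described in the issue can be sketched as below. Names are illustrative, not HBase's real classes: a batch closes once either the size capacity or the entry-count capacity is reached, and for a bulk load entry the size that counts should be the referenced HFiles' sizes, not the (tiny) marker cells' size.

```java
import java.util.List;

// Sketch of the size/count capacity rule; Entry is a hypothetical
// stand-in for a WAL entry queued for replication.
public class ReplicationBatchSketch {
    record Entry(long cellsSize, long hfilesSize, boolean isBulkLoad) {
        // The fix described in the issue: bulk load entries must be
        // accounted for by HFile size, not cell size.
        long replicationSize() {
            return isBulkLoad ? hfilesSize : cellsSize;
        }
    }

    // Returns how many leading entries go into one replication request,
    // bounded by replication.source.size.capacity (sizeCapacity) and
    // replication.source.nb.capacity (nbCapacity).
    static int batchEnd(List<Entry> entries, long sizeCapacity, int nbCapacity) {
        long size = 0;
        int count = 0;
        for (Entry e : entries) {
            size += e.replicationSize();
            count++;
            if (size >= sizeCapacity || count >= nbCapacity) {
                break;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        List<Entry> entries = List.of(
            new Entry(100, 0, false),        // normal edit: 100 bytes of cells
            new Entry(50, 10_000_000, true), // bulk load: counts as the 10 MB HFile
            new Entry(100, 0, false));
        // The 10 MB HFile fills the 1 MB size capacity at entry 2.
        System.out.println(batchEnd(entries, 1_000_000, 25)); // 2
    }
}
```

Counting the marker cells' 50 bytes instead of the 10 MB HFile would let far too much data into one request, which is exactly the bug being fixed.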



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
