[ https://issues.apache.org/jira/browse/HBASE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15043664#comment-15043664 ]

Duo Zhang commented on HBASE-14004:
-----------------------------------

{quote}
This will require a big change in how replication works but for the better, and 
replication will be less resource intensive because of fewer NN ops (if crash, 
we ask NN for file length, not ZK? If so, this would be a task we have been 
needing to do for a long time; i.e. undo keeping replication position in zk).
{quote}
I think we should have two branches to determine how many entries we can read: 
one for a closed WAL file, and one for a WAL that is still being written. We can 
get this information using {{DistributedFileSystem.isFileClosed}}. If the file 
is already closed, then we can use the length we get from HDFS. If the file is 
still open for writing, then we should ask the RS that is writing it for the 
safe length. If we cannot find the RS (maybe it has already crashed), then we 
can wait a while, since the namenode will eventually recover the lease and close 
the file.
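
Roughly something like the sketch below. {{DistributedFileSystem.isFileClosed}} 
and {{getFileStatus}} are existing HDFS APIs, but {{getSafeLengthFromWriter}} is 
a hypothetical RPC to the RS that owns the WAL; it does not exist today.

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Sketch only. getSafeLengthFromWriter() is a hypothetical RPC to the RS
// that is currently writing the WAL; it does not exist today.
public abstract class WalReadableLength {

  protected abstract Long getSafeLengthFromWriter(Path wal); // hypothetical RPC

  public long getReadableLength(DistributedFileSystem dfs, Path wal)
      throws IOException, InterruptedException {
    if (dfs.isFileClosed(wal)) {
      // Branch 1: the WAL is already closed, the NN-reported length is final.
      return dfs.getFileStatus(wal).getLen();
    }
    // Branch 2: still open for writing, ask the writing RS for the safe length.
    Long safeLength = getSafeLengthFromWriter(wal);
    if (safeLength != null) {
      return safeLength;
    }
    // The writer is unreachable (probably crashed): wait until the NN recovers
    // the lease and closes the file, then the reported length is safe again.
    while (!dfs.isFileClosed(wal)) {
      Thread.sleep(1000);
    }
    return dfs.getFileStatus(wal).getLen();
  }
}
{code}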

{quote}
There is such a sequenceid but it is by-region, not global. Could we keep 
sequence id accounting by region? (We already do this elsewhere.)
{quote}
So maybe we still need to use an "acked length", not an "acked id". But I think 
this is enough to filter out duplicate WAL entries.
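
For example, when shipping entries we would just stop at the acked length 
instead of trying to deduplicate by id. A sketch, where {{Reader}} and 
{{Entry}} are placeholders, not the real replication source classes:

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Sketch only: Reader and Entry stand in for whatever the replication source
// actually reads WAL entries with; they are not the real HBase classes.
public class AckedLengthFilter {

  interface Entry { }

  interface Reader {
    Entry next() throws IOException;       // null when no more readable data
    long getPosition() throws IOException; // byte offset after the last next()
  }

  // Ship only entries that are fully contained within the acked length.
  public List<Entry> readAckedEntries(Reader reader, long ackedLength)
      throws IOException {
    List<Entry> entries = new ArrayList<>();
    Entry entry;
    while ((entry = reader.next()) != null) {
      if (reader.getPosition() > ackedLength) {
        // This entry ends beyond the acked length, so it may not have been
        // durably acknowledged yet -- stop here and do not ship it.
        break;
      }
      entries.add(entry);
    }
    return entries;
  }
}
{code}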

> [Replication] Inconsistency between Memstore and WAL may result in data in 
> remote cluster that is not in the origin
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-14004
>                 URL: https://issues.apache.org/jira/browse/HBASE-14004
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>            Reporter: He Liangliang
>            Priority: Critical
>              Labels: replication, wal
>
> Looks like the current write path can cause an inconsistency between the 
> Memstore/HFile and the WAL, which causes the slave cluster to have more data 
> than the master cluster.
> The simplified write path looks like:
> 1. insert record into Memstore
> 2. write record to WAL
> 3. sync WAL
> 4. rollback Memstore if 3 fails
> It's possible that the HDFS sync RPC call fails, but the data has already 
> been (perhaps partially) transferred to the DNs and eventually gets persisted. 
> As a result, the handler will roll back the Memstore, and the later flushed 
> HFile will also skip this record.
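
For reference, the write path quoted above boils down to roughly the following; 
the {{Memstore}}, {{Wal}} and {{Record}} types and method names are placeholders, 
not the actual HRegion/FSHLog internals.

{code:java}
import java.io.IOException;

// Simplified illustration of the write path described in the issue; all types
// here are placeholders, not the real HRegion/FSHLog internals.
public class WritePathSketch {

  interface Record { }

  interface Memstore {
    void insert(Record r);
    void rollback(Record r);
  }

  interface Wal {
    long write(Record r) throws IOException; // returns a txid
    void sync(long txid) throws IOException;
  }

  private final Memstore memstore;
  private final Wal wal;

  WritePathSketch(Memstore memstore, Wal wal) {
    this.memstore = memstore;
    this.wal = wal;
  }

  void append(Record record) throws IOException {
    memstore.insert(record);        // 1. insert record into Memstore
    long txid = wal.write(record);  // 2. write record to WAL
    try {
      wal.sync(txid);               // 3. sync WAL
    } catch (IOException e) {
      // 4. rollback Memstore if sync fails. The problem: the sync RPC can fail
      // on the client side even though the bytes already reached the DNs and
      // eventually get persisted, so the WAL (and hence replication) still
      // carries a record that the Memstore/HFile no longer has.
      memstore.rollback(record);
      throw e;
    }
  }
}
{code}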



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
