[ https://issues.apache.org/jira/browse/HBASE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977769#comment-14977769 ]

Duo Zhang commented on HBASE-14004:
-----------------------------------

The problem here does not only affect replication.

{quote}
As a result, the handler will rollback the Memstore and the later flushed HFile 
will also skip this record.
{quote}

What if the regionserver crashes before flushing the HFile? I think the record 
will come back, since it has already been persisted in the WAL.

Adding a marker may be a solution, but you would need to check the marker 
everywhere when replaying the WAL, and you still have to deal with a failure 
while placing the marker... I do not think it is easy to do...

The basic problem here is that we may end up with an inconsistency between the 
memstore and the WAL when we fail to sync the WAL.
A simple solution is to kill the regionserver when a WAL sync fails, which 
means we never roll back the memstore but instead reconstruct it from the WAL. 
That way we can make sure there is no difference between the memstore and the 
WAL.
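
Something along these lines (just a rough sketch with illustrative names, not 
the actual write path code):

{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.Abortable;
import org.apache.hadoop.hbase.wal.WAL;

// Sketch only: on a failed WAL sync, abort the regionserver instead of
// rolling back the memstore, so the memstore is always rebuilt from the WAL
// on recovery and the two can never diverge.
public final class SyncOrDie {
  private SyncOrDie() {}

  static void syncOrAbort(WAL wal, long txid, Abortable server) throws IOException {
    try {
      wal.sync(txid); // wait for the edit to be persisted
    } catch (IOException e) {
      // Do NOT roll back the memstore here: the edit may already have reached
      // the DataNodes and could come back when the WAL is replayed.
      server.abort("WAL sync failed; aborting so the memstore is rebuilt from the WAL", e);
      throw e;
    }
  }
}
{code}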
If we want to keep the regionserver alive when a sync fails, then I think we 
need to find out the real result of the sync operation. Maybe we could close 
the WAL file and check its length? Of course, if we have lost the connection 
to the namenode, I think there is no simple solution other than killing the 
regionserver...
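
The length check could look roughly like this (again only a sketch; 
{{expectedLength}} is a hypothetical bookkeeping value we would have to track 
ourselves, and the length is only reliable after the file has been closed):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch of the "close the file and check its length" idea: after a failed
// sync, close the current WAL file and compare its length on HDFS with the
// offset we expected after appending the edit.
public final class WalLengthCheck {
  private WalLengthCheck() {}

  static boolean editWasPersisted(FileSystem fs, Path walFile, long expectedLength)
      throws IOException {
    long actual = fs.getFileStatus(walFile).getLen();
    return actual >= expectedLength;
  }
}
{code}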

Thanks.

> [Replication] Inconsistency between Memstore and WAL may result in data in 
> remote cluster that is not in the origin
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-14004
>                 URL: https://issues.apache.org/jira/browse/HBASE-14004
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>            Reporter: He Liangliang
>            Priority: Critical
>              Labels: replication, wal
>
> Looks like the current write path can cause an inconsistency between the 
> memstore/HFile and the WAL, which can leave the slave cluster with more data 
> than the master cluster.
> The simplified write path looks like:
> 1. insert record into Memstore
> 2. write record to WAL
> 3. sync WAL
> 4. rollback Memstore if 3 fails
> It's possible that the HDFS sync RPC call fails, but the data has already 
> (perhaps partially) been transported to the DataNodes, where it eventually 
> gets persisted. As a result, the handler will roll back the Memstore and the 
> HFile flushed later will also skip this record.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
