[
https://issues.apache.org/jira/browse/HADOOP-1903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
stack updated HADOOP-1903:
--------------------------
Component/s: contrib/hbase
> [hbase] Possible data loss if Exception happens between snapshot and flush to disk.
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-1903
> URL: https://issues.apache.org/jira/browse/HADOOP-1903
> Project: Hadoop
> Issue Type: Bug
> Components: contrib/hbase
> Reporter: stack
> Assignee: stack
> Priority: Minor
> Fix For: 0.15.0
>
> Attachments: 1903.patch
>
>
> There exists a little window during which we can lose data. During a
> memcache flush, we make an in-memory copy, a 'snapshot'. The memcache is then
> zeroed and off we go again taking updates. Meanwhile, in the background, we are
> supposed to flush the snapshot to disk. If this process is interrupted --
> e.g. HDFS is yanked out from under us or an OOME occurs in this thread --
> then the contents of the snapshot are lost.
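One way to close the window is to hold onto the snapshot until the flush has actually made it out to disk and, on failure, fold the snapshot back into the live memcache so the edits are not lost. Below is a minimal sketch of that idea in Java; the Memcache and FlushTarget names and the String-keyed maps are illustrative only, not the actual HBase 0.15 classes nor the attached 1903.patch.

{code:java}
import java.io.IOException;
import java.util.SortedMap;
import java.util.TreeMap;

/**
 * Sketch of a memcache whose snapshot survives a failed flush.
 * Class and method names are hypothetical, not the HBase 0.15 API.
 */
public class Memcache {
  private SortedMap<String, byte[]> memcache = new TreeMap<String, byte[]>();
  private SortedMap<String, byte[]> snapshot = null;

  /** Accept an update into the live memcache. */
  public synchronized void put(String key, byte[] value) {
    memcache.put(key, value);
  }

  /**
   * Swap the live memcache for an empty one, keeping the old contents as
   * a snapshot.  Updates arriving after this call land in the new map.
   */
  public synchronized SortedMap<String, byte[]> snapshot() {
    snapshot = memcache;
    memcache = new TreeMap<String, byte[]>();
    return snapshot;
  }

  /**
   * Flush the snapshot to disk.  The snapshot is only discarded after the
   * write succeeds; on failure its contents are folded back into the live
   * memcache so nothing is lost and the flush can be retried.
   */
  public void flush(FlushTarget target) throws IOException {
    SortedMap<String, byte[]> toFlush = snapshot();
    try {
      target.write(toFlush);     // may throw if HDFS is yanked away, etc.
      clearSnapshot();
    } catch (IOException e) {
      restoreSnapshot();         // put the edits back; retry later
      throw e;
    }
  }

  private synchronized void clearSnapshot() {
    snapshot = null;
  }

  private synchronized void restoreSnapshot() {
    // Live entries win: they were written after the snapshot was taken.
    snapshot.putAll(memcache);
    memcache = snapshot;
    snapshot = null;
  }

  /** Stand-in for the code that writes a map of edits out to an HDFS file. */
  public interface FlushTarget {
    void write(SortedMap<String, byte[]> edits) throws IOException;
  }
}
{code}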