[ https://issues.apache.org/jira/browse/HBASE-2283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12848810#action_12848810 ]

Jean-Daniel Cryans commented on HBASE-2283:
-------------------------------------------

This is from HBASE-1944 (see our use case there) and is currently 
trunk-specific, since it's a new feature that arrived alongside group commit. 
It relies on the awaitNanos timer in HLog.LogSyncer.run to hflush entries 
that were appended but not yet flushed to the DNs. This deferred flush is on 
by default in trunk (so edits are less durable), following a vote back in 
November; if I remember correctly, Stack wasn't comfortable shipping much 
slower inserts out of the box compared to the 0.20 branch.
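Roughly, the deferred-sync loop works like this (a sketch only, not the 
actual HLog internals; the class and field names below are illustrative): 
writers append edits and return immediately, while a syncer thread uses a 
timed awaitNanos wait to periodically hflush whatever has accumulated.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch in the spirit of HLog.LogSyncer.run: edits appended
// to the log are batched, and a timed awaitNanos wakes the syncer to
// hflush whatever accumulated since the last flush.
class LogSyncerSketch {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition syncRequested = lock.newCondition();
    private final long flushIntervalNanos;
    long unflushedEdits = 0;   // edits appended but not yet hflushed
    long flushCount = 0;       // how many times hflush has run

    LogSyncerSketch(long intervalMillis) {
        this.flushIntervalNanos = intervalMillis * 1_000_000L;
    }

    // Called by a writer after appending an edit to the DFS stream.
    void editAppended() {
        lock.lock();
        try {
            unflushedEdits++;
        } finally {
            lock.unlock();
        }
    }

    // One iteration of the syncer loop: wait up to the interval for an
    // explicit sync request, then flush anything appended in the meantime.
    void syncLoopOnce() throws InterruptedException {
        lock.lock();
        try {
            syncRequested.awaitNanos(flushIntervalNanos);
            if (unflushedEdits > 0) {
                hflush();
                unflushedEdits = 0;
            }
        } finally {
            lock.unlock();
        }
    }

    // Stand-in for the DFS output stream's hflush/sync call.
    void hflush() {
        flushCount++;
    }
}
```

The durability trade-off is exactly the interval: anything appended since the 
last timer expiry is lost if the process dies before the next hflush.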

> row level atomicity 
> --------------------
>
>                 Key: HBASE-2283
>                 URL: https://issues.apache.org/jira/browse/HBASE-2283
>             Project: Hadoop HBase
>          Issue Type: Bug
>            Reporter: Kannan Muthukkaruppan
>            Assignee: Kannan Muthukkaruppan
>            Priority: Blocker
>             Fix For: 0.20.4, 0.21.0
>
>         Attachments: rowLevelAtomicity_2283_v1.patch, 
> rowLevelAtomicity_2283_v2.patch, rowLevelAtomicity_2283_v3.patch
>
>
> The flow during an HRegionServer.put() seems to be the following. [For now, 
> let's just consider a single-row Put containing edits to multiple column 
> families/columns.]
> HRegionServer.put() does a:
>         HRegion.put();
>         syncWal();  /* the HDFS sync call; this assumes we have HDFS-200 */
> HRegion.put() does a:
>   for each column family 
>   {
>       HLog.append(all edits to the column family);
>       write all edits to Memstore;
>   }
> HLog.append() does a :
>   foreach edit in a single column family {
>     doWrite()
>   }
> doWrite() does a:
>    this.writer.append().
> There seem to be two related issues here that could result in 
> inconsistencies.
> Issue #1: A put() does a bunch of HLog.append() calls. These in turn do a 
> bunch of "write" calls on the underlying DFS stream. If we crash after 
> having written out only some of the appends to DFS, recovery will run and 
> apply a partial transaction to the memstore.
> Issue #2: The updates to the memstore should happen after the sync rather 
> than before. Otherwise, there is the danger that the write to DFS (the sync) 
> fails for some reason and we return an error to the client, but we have 
> already applied the edits to the memstore, so subsequent reads will serve 
> uncommitted data.
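A minimal sketch of the ordering the two issues point toward (the WalSketch 
and MemstoreSketch names below are illustrative stand-ins, not the actual 
patch): append all of a row's edits as a single WAL record, sync it, and only 
then apply the edits to the memstore, so a failed sync returns an error 
without ever exposing uncommitted data to readers.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of row-level atomicity in the put path; WalSketch
// and MemstoreSketch stand in for HLog and the Memstore.
class AtomicPutSketch {
    interface WalSketch {
        // Addresses Issue #1: one append+sync for the whole row's edits,
        // rather than one append per column family.
        void appendAndSync(List<String> edits) throws Exception;
    }

    static class MemstoreSketch {
        final List<String> committed = new ArrayList<>();
        void apply(List<String> edits) { committed.addAll(edits); }
    }

    final WalSketch wal;
    final MemstoreSketch memstore = new MemstoreSketch();

    AtomicPutSketch(WalSketch wal) { this.wal = wal; }

    // Addresses Issue #2: the memstore is touched only after the sync
    // succeeds, so a failed sync leaves nothing uncommitted visible.
    boolean put(List<String> allEditsForRow) {
        try {
            wal.appendAndSync(allEditsForRow);
        } catch (Exception failedSync) {
            return false;   // client sees the error; memstore untouched
        }
        memstore.apply(allEditsForRow);
        return true;
    }
}
```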

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
