[ https://issues.apache.org/jira/browse/HBASE-29146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang resolved HBASE-29146.
-------------------------------
    Fix Version/s: 2.7.0
                   3.0.0-beta-2
                   2.6.3
     Hadoop Flags: Reviewed
       Resolution: Fixed

> Incremental backups can fail due to not cleaning up the MR bulkload output 
> directory
> ------------------------------------------------------------------------------------
>
>                 Key: HBASE-29146
>                 URL: https://issues.apache.org/jira/browse/HBASE-29146
>             Project: HBase
>          Issue Type: Bug
>          Components: backup&restore
>            Reporter: Hernan Gelaf-Romer
>            Assignee: Hernan Gelaf-Romer
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 2.7.0, 3.0.0-beta-2, 2.6.3
>
>
> At the moment, if a bulkload file is archived while we're backing it up, or 
> if there are archive files that need to be loaded, we run into the following 
> exception:
>  
> Unexpected Exception : org.apache.hadoop.mapred.FileAlreadyExistsException
>  
> This happens because we run MR jobs to re-split these HFiles, and we cannot 
> run subsequent MR jobs against the same target bulk output directory without 
> first cleaning that directory up. We should make sure we properly clean up 
> the bulkload output directory between runs of MapReduceHFileSplitterJob. A 
> minimal sketch of that cleanup step follows below.
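>
> The fix boils down to deleting the bulk output directory before each 
> splitter run. Here is a minimal Java sketch using the Hadoop FileSystem 
> API; the helper class and call site are illustrative assumptions, not the 
> actual patch:
>
>     import java.io.IOException;
>
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.fs.FileSystem;
>     import org.apache.hadoop.fs.Path;
>
>     public final class BulkOutputCleaner {
>       /**
>        * Hypothetical helper: recursively delete the MR bulkload output
>        * directory so a directory left over from a previous
>        * MapReduceHFileSplitterJob run cannot trigger
>        * FileAlreadyExistsException on the next submission.
>        */
>       public static void cleanBulkOutputDir(Configuration conf,
>           Path bulkOutputPath) throws IOException {
>         FileSystem fs = bulkOutputPath.getFileSystem(conf);
>         if (fs.exists(bulkOutputPath)) {
>           // true = recursive; the splitter job recreates the directory.
>           fs.delete(bulkOutputPath, true);
>         }
>       }
>     }
>
> Invoked before each MapReduceHFileSplitterJob submission, this guarantees 
> the job's target output path does not already exist.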



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
