[
https://issues.apache.org/jira/browse/HBASE-21149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649606#comment-16649606
]
Ted Yu edited comment on HBASE-21149 at 10/16/18 1:11 AM:
--
For Hadoop 3.1, when multiple files are included in one DistCp session
(specified via the listing file), they (chunks, in DistCp's terminology)
are concatenated by CopyCommitter#concatFileChunks.
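For reference, the kind of multi-file session described above can be set up by handing DistCp several source paths; it then builds its listing file over all of them. A minimal sketch, assuming hadoop-distcp 3.1 on the classpath (the paths and the null-DistCpOptions construction are illustrative, not taken from this test):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.tools.DistCp;
import org.apache.hadoop.util.ToolRunner;

public class MultiFileDistCpSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Two independent source files plus the target directory; DistCp
    // records both sources in its internal listing file.
    int rc = ToolRunner.run(conf, new DistCp(conf, null), new String[] {
        "hdfs://namenode:8020/src/file1",
        "hdfs://namenode:8020/src/file2",
        "hdfs://namenode:8020/backup/target" });
    System.out.println("DistCp returned " + rc);
  }
}
{code}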
CopyCommitter#concatFileChunks throws the following exception when trying
to concatenate the two bulk-loaded hfiles:
{code}
2018-10-13 14:09:25,351 WARN [Thread-936] mapred.LocalJobRunner$Job(590): job_local1795473782_0004
java.io.IOException: Inconsistent sequence file: current chunk file org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_ length = 5100 aclEntries = null, xAttrs = null} doesnt match prior entry org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_ length = 5142 aclEntries = null, xAttrs = null}
  at org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
  at org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
  at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
{code}
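The shape of the failing validation is that neighboring entries in the sorted copy listing must be consecutive chunks of the same source file. A simplified, self-contained model of that check (ChunkEntry and the method body here are illustrative stand-ins, not the verbatim CopyCommitter source):
{code}
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class ConcatCheckSketch {
  // Illustrative stand-in for org.apache.hadoop.tools.CopyListingFileStatus.
  static class ChunkEntry {
    final String sourceFile; // path of the file this chunk belongs to
    final long offset;       // chunk offset within the source file
    final long length;       // chunk length

    ChunkEntry(String sourceFile, long offset, long length) {
      this.sourceFile = sourceFile;
      this.offset = offset;
      this.length = length;
    }
  }

  // Simplified model of the CopyCommitter#concatFileChunks validation:
  // neighboring entries must be consecutive chunks of the same file.
  static void concatFileChunks(List<ChunkEntry> sortedListing) throws IOException {
    ChunkEntry prior = null;
    for (ChunkEntry current : sortedListing) {
      if (prior != null
          && (!current.sourceFile.equals(prior.sourceFile)
              || current.offset != prior.offset + prior.length)) {
        // Two independent files in one listing always land here.
        throw new IOException("Inconsistent sequence file: current chunk file "
            + current.sourceFile + " doesnt match prior entry " + prior.sourceFile);
      }
      prior = current;
      // ... the real committer concatenates the chunk into the target here ...
    }
  }

  public static void main(String[] args) throws IOException {
    // Two unrelated bulk-loaded hfiles, as in the log above: throws IOException.
    concatFileChunks(Arrays.asList(
        new ChunkEntry("394e6d39a9b94b148b9089c4fb967aad_SeqId_205_", 0, 5142),
        new ChunkEntry("a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_", 0, 5100)));
  }
}
{code}
Two independent hfiles can never satisfy the same-file condition, which is exactly the failure above.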
But the two bulk-loaded hfiles are independent files, not chunks of a single
source file, so this consistency check necessarily fails.
This results in -999 being returned by DistCp.
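The -999 appears to be DistCp's catch-all failure code, DistCpConstants.UNKNOWN_ERROR in the Hadoop source. A minimal sketch of that mapping (simplified from DistCp#run; assumes hadoop-distcp on the classpath):
{code}
import java.io.IOException;
import org.apache.hadoop.tools.DistCpConstants;

public class UnknownErrorSketch {
  // Simplified model of DistCp#run's error handling: an exception escaping
  // the copy job (here, CopyCommitter#commitJob's IOException) is mapped
  // to the catch-all DistCpConstants.UNKNOWN_ERROR, which is -999.
  static int runCopyJob() {
    try {
      throw new IOException("Inconsistent sequence file: ...");
    } catch (Exception e) {
      return DistCpConstants.UNKNOWN_ERROR;
    }
  }

  public static void main(String[] args) {
    System.out.println(runCopyJob()); // prints -999
  }
}
{code}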
> TestIncrementalBackupWithBulkLoad may fail due to file copy failure
> ---
>
> Key: HBASE-21149
> URL: https://issues.apache.org/jira/browse/HBASE-21149
> Project: HBase
> Issue Type: Test
> Components: backup&restore
> Reporter: Ted Yu
> Assignee: Ted Yu
> Priority: Major
> Fix For: 3.0.0
>
> Attachments: 21149.v2.txt, HBASE-21149-v1.patch,
> testIncrementalBackupWithBulkLoad-output.txt
>
>
> From
> https://builds.apache.org/job/HBase%20Nightly/job/master/471/testReport/junit/org.apache.hadoop.hbase.backup/TestIncrementalBackupWithBulkLoad/TestIncBackupDeleteTable/
> :
> {code}
> 2018-09-03 11:54:30,526 ERROR [Time-limited test] impl.TableBackupClient(235): Unexpected Exception : Failed copy from hdfs://localhost:53075/user/jenkins/test-data/ecd40bd0-cb93-91e0-90b5-7bfd5bb2c566/data/default/test-1535975627781/773f5709b645b46bd3840f9cfb549c5a/f/0f626c66493649daaf84057b8dd71a30_SeqId_205_,hdfs://localhost:53075/user/jenkins/test-data/ecd40bd0-cb93-91e0-90b5-7bfd5bb2c566/data/default/test-1535975627781/773f5709b645b46bd3840f9cfb549c5a/f/ad8df6415bd9459d9b3df76c588d79df_SeqId_205_ to hdfs://localhost:53075/backupUT/backup_1535975655488
> java.io.IOException: Failed copy from hdfs://localhost:53075/user/jenkins/test-data/ecd40bd0-cb93-91e0-90b5-7bfd5bb2c566/data/default/test-1535975627781/773f5709b645b46bd3840f9cfb549c5a/f/0f626c66493649daaf84057b8dd71a30_SeqId_205_,hdfs://localhost:53075/user/jenkins/test-data/ecd40bd0-cb93-91e0-90b5-7bfd5bb2c566/data/default/test-1535975627781/773f5709b645b46bd3840f9cfb549c5a/f/ad8df6415bd9459d9b3df76c588d79df_SeqId_205_ to hdfs://localhost:53075/backupUT/backup_1535975655488
>   at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.incrementalCopyHFiles(IncrementalTableBackupClient.java:351)
>