[
https://issues.apache.org/jira/browse/HBASE-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616141#comment-13616141
]
Jieshan Bean commented on HBASE-8207:
-------------------------------------
We found the same problem in our test environment, attaching the logs for your
reference:
{noformat}
2013-03-25 04:51:20,929 INFO
[ReplicationExecutor-0.replicationSource,1-160-172-0-130,60020,1364199883591]
NB dead servers : 4
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:517)
2013-03-25 04:51:20,929 INFO
[ReplicationExecutor-0.replicationSource,1-160-172-0-130,60020,1364199883591]
Possible location
hdfs://hacluster/hbase/.logs/130,60020,1364199883591/160-172-0-130%252C60020%252C1364199883591.1364200564291
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:528)
2013-03-25 04:51:20,930 INFO
[ReplicationExecutor-0.replicationSource,1-160-172-0-130,60020,1364199883591]
Possible location
hdfs://hacluster/hbase/.logs/130,60020,1364199883591-splitting/160-172-0-130%252C60020%252C1364199883591.1364200564291
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:528)
2013-03-25 04:51:20,932 INFO
[ReplicationExecutor-0.replicationSource,1-160-172-0-130,60020,1364199883591]
Possible location
hdfs://hacluster/hbase/.logs/0/160-172-0-130%252C60020%252C1364199883591.1364200564291
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:528)
2013-03-25 04:51:20,934 INFO
[ReplicationExecutor-0.replicationSource,1-160-172-0-130,60020,1364199883591]
Possible location
hdfs://hacluster/hbase/.logs/0-splitting/160-172-0-130%252C60020%252C1364199883591.1364200564291
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:528)
2013-03-25 04:51:20,935 INFO
[ReplicationExecutor-0.replicationSource,1-160-172-0-130,60020,1364199883591]
Possible location
hdfs://hacluster/hbase/.logs/172/160-172-0-130%252C60020%252C1364199883591.1364200564291
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:528)
2013-03-25 04:51:20,937 INFO
[ReplicationExecutor-0.replicationSource,1-160-172-0-130,60020,1364199883591]
Possible location
hdfs://hacluster/hbase/.logs/172-splitting/160-172-0-130%252C60020%252C1364199883591.1364200564291
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:528)
2013-03-25 04:51:20,938 INFO
[ReplicationExecutor-0.replicationSource,1-160-172-0-130,60020,1364199883591]
Possible location
hdfs://hacluster/hbase/.logs/160/160-172-0-130%252C60020%252C1364199883591.1364200564291
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:528)
2013-03-25 04:51:20,939 INFO
[ReplicationExecutor-0.replicationSource,1-160-172-0-130,60020,1364199883591]
Possible location
hdfs://hacluster/hbase/.logs/160-splitting/160-172-0-130%252C60020%252C1364199883591.1364200564291
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:528)
2013-03-25 04:51:20,941 WARN
[ReplicationExecutor-0.replicationSource,1-160-172-0-130,60020,1364199883591]
1-160-172-0-130,60020,1364199883591 Got:
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:563)
java.io.IOException: File from recovered queue is nowhere to be found
at
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:545)
at
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:311)
Caused by: java.io.FileNotFoundException: File does not exist:
hdfs://hacluster/hbase/.oldlogs/160-172-0-130%2C60020%2C1364199883591.1364200564291
at
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:752)
at
org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1692)
at
org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1716)
at
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:55)
at
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:177)
at
org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:728)
at
org.apache.hadoop.hbase.replication.regionserver.ReplicationHLogReaderManager.openReader(ReplicationHLogReaderManager.java:67)
at
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:511)
... 1 more
{noformat}
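The bogus "Possible location" paths in the log above come straight out of hyphen-splitting the recovered-queue name. A minimal sketch of that failure mode (hypothetical helper names, not the actual HBase code):

```java
import java.util.Arrays;
import java.util.List;

public class HyphenSplitDemo {
    // A recovered-queue znode name looks like "<peerId>-<host,port,startcode>".
    // Splitting the whole name on "-" shreds any hostname that itself
    // contains hyphens, which is effectively what checkIfQueueRecovered did.
    static List<String> naiveDeadServers(String queueName) {
        String[] parts = queueName.split("-");
        // Drop the leading peer id; every remaining fragment is (wrongly)
        // treated as a dead region server name.
        return Arrays.asList(parts).subList(1, parts.length);
    }

    public static void main(String[] args) {
        // The queue name from the log above, for host "160-172-0-130":
        String queue = "1-160-172-0-130,60020,1364199883591";
        // Yields the four fragments "160", "172", "0" and
        // "130,60020,1364199883591" -- matching "NB dead servers : 4" and
        // the bogus path components ".logs/160", ".logs/172", ".logs/0",
        // ".logs/130,60020,1364199883591" in the log.
        System.out.println(naiveDeadServers(queue));
    }
}
```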
> Replication could have data loss when machine name contains hyphen "-"
> ----------------------------------------------------------------------
>
> Key: HBASE-8207
> URL: https://issues.apache.org/jira/browse/HBASE-8207
> Project: HBase
> Issue Type: Bug
> Components: Replication
> Affects Versions: 0.95.0, 0.94.6
> Reporter: Jeffrey Zhong
> Assignee: Jeffrey Zhong
> Priority: Critical
> Fix For: 0.95.0, 0.98.0, 0.94.7
>
> Attachments: failed.txt, HBASE-8212-94.patch
>
>
> In the recent TestReplication* test failures, I was finally able to find
> the cause (or one of the causes) of their intermittent failures.
> When a machine name contains "-", it breaks
> ReplicationSource.checkIfQueueRecovered. The consequence is that the
> deadRegionServers list is so far off that replication does not wait for log
> splitting to finish for a WAL file and moves on to the next one (data loss).
> You can see that replication uses these weird paths, constructed from
> deadRegionServers, to check for a file's existence:
> {code}
> 2013-03-26 21:26:51,385 INFO
> [ReplicationExecutor-0.replicationSource,2-ip-10-197-0-156.us-west-1.compute.internal,52170,1364333181125]
> regionserver.ReplicationSource(524): Possible location
> hdfs://localhost:52882/user/ec2-user/hbase/.logs/1.compute.internal,52170,1364333181125/ip-10-197-0-156.us-west-1.compute.internal%252C52170%252C1364333181125.1364333199540
> 2013-03-26 21:26:51,386 INFO
> [ReplicationExecutor-0.replicationSource,2-ip-10-197-0-156.us-west-1.compute.internal,52170,1364333181125]
> regionserver.ReplicationSource(524): Possible location
> hdfs://localhost:52882/user/ec2-user/hbase/.logs/1.compute.internal,52170,1364333181125-splitting/ip-10-197-0-156.us-west-1.compute.internal%252C52170%252C1364333181125.1364333199540
> 2013-03-26 21:26:51,387 INFO
> [ReplicationExecutor-0.replicationSource,2-ip-10-197-0-156.us-west-1.compute.internal,52170,1364333181125]
> regionserver.ReplicationSource(524): Possible location
> hdfs://localhost:52882/user/ec2-user/hbase/.logs/west/ip-10-197-0-156.us-west-1.compute.internal%252C52170%252C1364333181125.1364333199540
> 2013-03-26 21:26:51,389 INFO
> [ReplicationExecutor-0.replicationSource,2-ip-10-197-0-156.us-west-1.compute.internal,52170,1364333181125]
> regionserver.ReplicationSource(524): Possible location
> hdfs://localhost:52882/user/ec2-user/hbase/.logs/west-splitting/ip-10-197-0-156.us-west-1.compute.internal%252C52170%252C1364333181125.1364333199540
> 2013-03-26 21:26:51,391 INFO
> [ReplicationExecutor-0.replicationSource,2-ip-10-197-0-156.us-west-1.compute.internal,52170,1364333181125]
> regionserver.ReplicationSource(524): Possible location
> hdfs://localhost:52882/user/ec2-user/hbase/.logs/156.us/ip-10-197-0-156.us-west-1.compute.internal%252C52170%252C1364333181125.1364333199540
> 2013-03-26 21:26:51,394 INFO
> [ReplicationExecutor-0.replicationSource,2-ip-10-197-0-156.us-west-1.compute.internal,52170,1364333181125]
> regionserver.ReplicationSource(524): Possible location
> hdfs://localhost:52882/user/ec2-user/hbase/.logs/156.us-splitting/ip-10-197-0-156.us-west-1.compute.internal%252C52170%252C1364333181125.1364333199540
> 2013-03-26 21:26:51,396 INFO
> [ReplicationExecutor-0.replicationSource,2-ip-10-197-0-156.us-west-1.compute.internal,52170,1364333181125]
> regionserver.ReplicationSource(524): Possible location
> hdfs://localhost:52882/user/ec2-user/hbase/.logs/0/ip-10-197-0-156.us-west-1.compute.internal%252C52170%252C1364333181125.1364333199540
> 2013-03-26 21:26:51,398 INFO
> [ReplicationExecutor-0.replicationSource,2-ip-10-197-0-156.us-west-1.compute.internal,52170,1364333181125]
> regionserver.ReplicationSource(524): Possible location
> hdfs://localhost:52882/user/ec2-user/hbase/.logs/0-splitting/ip-10-197-0-156.us-west-1.compute.internal%252C52170%252C1364333181125.1364333199540
> {code}
> This happened in the recent test failure in
> http://54.241.6.143/job/HBase-0.94/org.apache.hbase$hbase/21/testReport/junit/org.apache.hadoop.hbase.replication/TestReplicationQueueFailover/queueFailover/?auto_refresh=false
> Search for
> {code}
> File does not exist:
> hdfs://localhost:52882/user/ec2-user/hbase/.oldlogs/ip-10-197-0-156.us-west-1.compute.internal%2C52170%2C1364333181125.1364333199540
> {code}
> After 10 retries, the replication source gave up and moved on to the next
> file. Data loss happens.
> Since many EC2 machine names contain "-", including our Jenkins servers,
> this is a high-impact issue.
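For reference, the queue-name format itself is unambiguous: each dead server entry has the shape host,port,startcode, so a hyphen can only be a separator once the current segment already holds two commas. A sketch of such a parse (an illustration of the idea, assumed shapes, not the committed fix):

```java
import java.util.ArrayList;
import java.util.List;

public class QueueNameParser {
    // Parse "<peerId>-<host,port,startcode>[-<host,port,startcode>...]".
    // Assumptions: the peer id contains no "-", and a server entry always
    // holds exactly two commas, so a "-" is only a separator when the
    // current segment already contains both commas.
    static List<String> parseDeadServers(String queueName) {
        List<String> servers = new ArrayList<>();
        int peerEnd = queueName.indexOf('-');  // end of the peer id
        StringBuilder cur = new StringBuilder();
        for (char c : queueName.substring(peerEnd + 1).toCharArray()) {
            if (c == '-' && cur.chars().filter(ch -> ch == ',').count() == 2) {
                servers.add(cur.toString());   // segment is a complete entry
                cur.setLength(0);
            } else {
                cur.append(c);                 // hyphen is part of the hostname
            }
        }
        servers.add(cur.toString());
        return servers;
    }
}
```

With this, the hyphenated EC2 hostname from the description survives whole instead of being shredded into "west", "156.us", etc.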
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira