Generally, a network issue is the root cause of these problems.
On Thu, Jul 4, 2013 at 12:12 AM, Patrick Schless <[email protected]> wrote:
> Thanks for the tip, Himanshu.
>
> I'm on 0.92.1 (CDH 4.1.2), so I imagine I don't have 7122.
>
> On Tue, Jul 2, 2013 at 6:00 PM, Himanshu Vashishtha <[email protected]> wrote:
> > Patrick,
> >
> > What HBase version are you using for the master cluster? If < 0.94.8, does
> > it have 7122? https://issues.apache.org/jira/browse/HBASE-7122
> >
> > Thanks,
> > Himanshu
> >
> > On Tue, Jul 2, 2013 at 3:09 PM, Patrick Schless <[email protected]> wrote:
> > > I've just enabled replication (to 1 peer), and I'm seeing a bunch of
> > > errors, along the lines of [1]. Replication does seem to work, though
> > > (data is showing up in the standby cluster).
> > >
> > > The file exists (I can see it in the HDFS web GUI), but it seems to be
> > > empty.
> > >
> > > Is this an error I need to worry about?
> > >
> > > Thanks,
> > > Patrick
> > >
> > > [1] 2013-07-02 16:50:36,275 WARN org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: 1 Got EOF while reading, looks like this file is broken? hdfs://name-node.domain.com:8020/hbase/.logs/data-xbt.domain.com,60020,1372796067408/data-xbt.domain.com%2C60020%2C1372796067408.1372799669085
> > > 2013-07-02 16:50:36,275 DEBUG org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Nothing to replicate, sleeping 1000 times 10
> > > 2013-07-02 16:50:46,275 DEBUG org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Opening log for replication data-xbt.domain.com%2C60020%2C1372796067408.1372799669085 at 50573824
> > > 2013-07-02 16:50:46,325 WARN org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: 1 Got:
> > > java.io.EOFException: hdfs://name-node.domain.com:8020/hbase/.logs/data-xbt.domain.com,60020,1372796067408/data-xbt.domain.com%2C60020%2C1372796067408.1372799669085, entryStart=52524475, pos=52524544, end=52524544, edit=11033
> > >     at sun.reflect.GeneratedConstructorAccessor34.newInstance(Unknown Source)
> > >     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> > >     at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
> > >     at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.addFileInfoToException(SequenceFileLogReader.java:252)
> > >     at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:208)
> > >     at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.readAllEntriesToReplicateOrNextFile(ReplicationSource.java:427)
> > >     at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:306)
> > > Caused by: java.io.EOFException
> > >     at java.io.DataInputStream.readFully(DataInputStream.java:197)
> > >     at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:68)
> > >     at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:106)
> > >     at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2294)
> > >     at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2193)
> > >     at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2239)
> > >     at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:206)
> > >     ... 2 more
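For anyone hitting the same warning: the "file exists but seems to be empty" question above comes down to the WAL's length in HDFS. A minimal sketch of that check, with the name `wal_status` being a hypothetical helper, not anything from HBase or Hadoop; against the cluster you would point the commented `hdfs dfs` commands at the WAL path from the log, while the function itself uses a plain file test so the snippet runs anywhere:

```shell
# Hypothetical helper: report whether a WAL (or any file) is zero-length.
# Against the cluster, the equivalent checks on the HDFS path would be:
#   hdfs dfs -ls "$WAL"        # prints the file length
#   hdfs dfs -test -s "$WAL"   # exits 0 only if the file is non-empty
# Here the same zero-length test is shown against a local path.
wal_status() {
  if [ -s "$1" ]; then
    echo "non-empty"
  else
    echo "empty"
  fi
}
```

If the WAL really is zero-length, the EOFException on it is the symptom the thread is discussing; per the reply above, HBASE-7122 (fixed by 0.94.8) changed how replication handles such log files, so a version without that fix would keep logging this warning.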
