org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8689

2017-07-06 Thread anil gupta
, table=DE.CONFIG_DATA, attempt=30/35 failed=38ops, last exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8689, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog

Re: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8689

2017-07-06 Thread Ted Yu
table=DE.CONFIG_DATA, attempt=30/35 failed=38ops, last exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8689, requesting roll of WAL at org.apac

Re: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8689

2017-07-06 Thread anil gupta
Hey Ted, this is what I see in one of the region server logs (NPE at the bottom): 2017-07-06 19:07:07,778 INFO [ip-10-74-5-153.us-west-2.compute.internal,16020,1499320260501_ChoreService_1] regionserver.HRegionServer: ip-10-74-5-153.us-west-2.compute.internal,16020,1499320260501-MemstoreFlusherChore req

Re: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8689

2017-07-06 Thread M. Aaron Bossert
attempt=30/35 failed=38ops, last exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8689, requesting roll of WAL at org.apache.

Re: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8689

2017-07-06 Thread Ted Yu
Which Hadoop release are you using? In FSOutputSummer.java, I see the following around line 106: checkClosed(); if (off < 0 || len < 0 || off > b.length - len) { throw new ArrayIndexOutOfBoundsException(); } You didn't get an ArrayIndexOutOfBoundsException, so maybe b was null? On Thu
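Ted's reasoning can be checked with a small sketch of the guard he quotes. This is a minimal reproduction for illustration, not the actual Hadoop FSOutputSummer class: because the || conditions are evaluated left to right, a null buffer triggers a NullPointerException at b.length before the ArrayIndexOutOfBoundsException branch can ever be reached.

```java
// Sketch of the bounds check quoted from FSOutputSummer.java (hypothetical
// standalone class, not Hadoop's real implementation). It shows why a null
// buffer surfaces as a NullPointerException rather than an
// ArrayIndexOutOfBoundsException.
public class BoundsCheckSketch {
    static void write(byte[] b, int off, int len) {
        // off < 0 and len < 0 are checked first; only then is b.length
        // evaluated. If b is null, the NPE is thrown right here.
        if (off < 0 || len < 0 || off > b.length - len) {
            throw new ArrayIndexOutOfBoundsException();
        }
        // ... the real method would copy bytes into the checksum buffer
    }

    public static void main(String[] args) {
        try {
            write(null, 0, 4);
        } catch (NullPointerException e) {
            System.out.println("NPE: buffer was null");
        }
        try {
            write(new byte[2], 0, 4);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("AIOOBE: range exceeds buffer");
        }
    }
}
```

Running it prints the NPE message first and the AIOOBE message second, matching Ted's observation that an NPE in this code path points at a null buffer rather than a bad offset/length pair.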

Re: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8689

2017-07-06 Thread anil gupta
Thanks for the pointers, Aaron. We checked HDFS; it's reporting 0 under-replicated or corrupted blocks. @Ted: we are using Hadoop 2.7.3 (EMR 5.7.2). On Thu, Jul 6, 2017 at 4:49 PM, Ted Yu wrote: Which Hadoop release are you using? In FSOutputSummer.java, I see the following around line 106: