Hit Send too soon.

More information is needed before we know whether the fix for HBASE-11234 would
solve the problem you're facing.
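For context, the ArrayIndexOutOfBoundsException in the quoted trace is thrown by a raw byte copy when a cell reports a value offset/length that falls outside its backing array, which is the kind of bad bounds a mis-decoded prefix-tree cell would produce. A minimal, self-contained sketch of that failure mode (plain Java with illustrative names, not actual HBase code):

```java
// Sketch only: mimics a copyValueTo-style helper. A corrupt or mis-decoded
// cell hands it an offset/length past the end of the backing array, and
// System.arraycopy throws ArrayIndexOutOfBoundsException, as in the log.
public class CopyBoundsDemo {
    static int copyValueTo(byte[] src, int valueOffset, int valueLength,
                           byte[] dest, int destOffset) {
        System.arraycopy(src, valueOffset, dest, destOffset, valueLength);
        return destOffset + valueLength;
    }

    public static void main(String[] args) {
        byte[] backing = new byte[8];
        byte[] dest = new byte[16];

        // Valid bounds: the copy succeeds.
        copyValueTo(backing, 0, 8, dest, 0);

        // Bounds past the array end (offset 4 + length 8 > 8 bytes):
        // throws ArrayIndexOutOfBoundsException.
        try {
            copyValueTo(backing, 4, 8, dest, 0);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("ArrayIndexOutOfBoundsException as in the log");
        }
    }
}
```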

Cheers


On Thu, Jul 24, 2014 at 9:51 PM, Ted Yu <[email protected]> wrote:

> How often do you see this error?
> Is it possible to encapsulate the scenario in a unit test?
>
> I am not certain HBASE-11234
>
> On Jul 24, 2014, at 9:37 PM, "[email protected]" <[email protected]> wrote:
>
> > I use CDH HBase 0.96.1.1 (
> > http://archive.cloudera.com/cdh5/cdh/5/hbase-0.96.1.1-cdh5.0.2.releasenotes.html
> > ), and its release notes do not include this patch.
> >
> > If I apply this patch, rebuild HBase, and restart the regionserver, will
> > that solve the problem without having to empty the data?
> >
> > thanks
> >
> >
> > [email protected]
> >
> > From: Ted Yu
> > Date: 2014-07-25 13:20
> > To: [email protected]
> > Subject: Re: hbase server exception
> > The error indicates there was a bug in the prefix tree module.
> >
> > Does the 0.96 release you use include the fix for HBASE-11234 ?
> >
> > Cheers
> >
> >
> > On Thu, Jul 24, 2014 at 9:11 PM, [email protected] <[email protected]>
> wrote:
> >
> >> My HBase version is 0.96, and the regionserver throws this exception.
> >> Is this a bug?
> >>
> >> 2014-07-25 13:04:07,494 ERROR
> >> [regionserver60020-largeCompactions-1406165462536]
> >> regionserver.CompactSplitThread: Compaction failed Request =
> >> regionName=t2013,15100405147|1386470415000|472507379770398,1406208607841.763f79d0ed1d836b5748b70bf8a918cc.,
> >> storeName=b, fileCount=10, fileSize=10.3 G (6.7 G, 3.1 G, 432.1 M, 70.1 M,
> >> 7.7 M, 4.8 M, 7.7 M, 132.1 K, 7.4 M, 7.7 M), priority=2099999901,
> >> time=2069097954607173
> >> java.lang.ArrayIndexOutOfBoundsException
> >> at org.apache.hadoop.hbase.CellUtil.copyValueTo(CellUtil.java:105)
> >> at org.apache.hadoop.hbase.KeyValueUtil.appendToByteArray(KeyValueUtil.java:115)
> >> at org.apache.hadoop.hbase.KeyValueUtil.copyToNewByteArray(KeyValueUtil.java:90)
> >> at org.apache.hadoop.hbase.KeyValueUtil.copyToNewKeyValue(KeyValueUtil.java:74)
> >> at org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeSeeker.getKeyValue(PrefixTreeSeeker.java:98)
> >> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getKeyValue(HFileReaderV2.java:1070)
> >> at org.apache.hadoop.hbase.io.HalfStoreFileReader$1.getKeyValue(HalfStoreFileReader.java:143)
> >> at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:150)
> >> at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:240)
> >> at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:200)
> >> at org.apache.hadoop.hbase.regionserver.compactions.Compactor.createScanner(Compactor.java:259)
> >> at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:64)
> >> at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:103)
> >> at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:980)
> >> at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1415)
> >> at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:475)
> >> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> at java.lang.Thread.run(Thread.java:745)
> >>
> >> [email protected]
> >>
>
