Any ideas?

2015-07-01 9:50 GMT+08:00 Louis Hust <louis.h...@gmail.com>:

> So CDH 5.2.0 is patched with HBASE-11678?
>
> 2015-07-01 6:43 GMT+08:00 Stack <st...@duboce.net>:
>
>> I checked, Vladimir, and 5.2.0 is the first release with the necessary
>> HBASE-11678, "BucketCache ramCache fills heap after running a few hours".
>>
>> FYI,
>> Thanks,
>> St.Ack
>>
>> On Tue, Jun 30, 2015 at 3:03 PM, Vladimir Rodionov <
>> vladrodio...@gmail.com>
>> wrote:
>>
>> > I believe CDH 5.2.0 does not contain all of the critical BucketCache
>> > patches, but I may be wrong.
>> >
>> > -Vlad
>> >
>> > On Tue, Jun 30, 2015 at 12:25 AM, Louis Hust <louis.h...@gmail.com>
>> wrote:
>> >
>> > > <property>
>> > >   <name>hbase.bucketcache.size</name>
>> > >   <value>800000</value>
>> > >   <source>hbase-site.xml</source>
>> > > </property>
>> > > <property>
>> > >   <name>hbase.bucketcache.ioengine</name>
>> > >   <value>file:/export/hbase/cache.data</value>
>> > >   <source>hbase-site.xml</source>
>> > > </property>
>> > > <property>
>> > >   <name>hbase.bucketcache.combinedcache.enabled</name>
>> > >   <value>false</value>
>> > >   <source>hbase-site.xml</source>
>> > > </property>
>> > >
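A note on these values, assuming 0.98.x config semantics: a
hbase.bucketcache.size greater than 1.0 is read as megabytes, so 800000
requests roughly 780 GB of file-backed cache at /export/hbase/cache.data,
and setting hbase.bucketcache.combinedcache.enabled to false makes the
BucketCache a victim cache beneath the on-heap LruBlockCache rather than
the combined L1/L2 arrangement. The same settings again, annotated (the
comments are the only additions):

    <!-- file: enables the file-backed IOEngine at the given path -->
    <property>
      <name>hbase.bucketcache.ioengine</name>
      <value>file:/export/hbase/cache.data</value>
    </property>
    <!-- values above 1.0 are megabytes: 800000 MB is about 780 GB -->
    <property>
      <name>hbase.bucketcache.size</name>
      <value>800000</value>
    </property>
    <!-- false: BucketCache is a victim cache under the on-heap LRU cache -->
    <property>
      <name>hbase.bucketcache.combinedcache.enabled</name>
      <value>false</value>
    </property>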
>> > > 2015-06-30 12:22 GMT+08:00 Ted Yu <yuzhih...@gmail.com>:
>> > >
>> > > > How do you configure BucketCache ?
>> > > >
>> > > > Thanks
>> > > >
>> > > > On Mon, Jun 29, 2015 at 8:35 PM, Louis Hust <louis.h...@gmail.com>
>> > > wrote:
>> > > >
>> > > > > BTW, the HBase version is 0.98.6 (CDH 5.2.0).
>> > > > >
>> > > > > 2015-06-30 11:31 GMT+08:00 Louis Hust <louis.h...@gmail.com>:
>> > > > >
>> > > > > > Hi, all
>> > > > > >
>> > > > > > When I scan a table using the hbase shell, I get the following
>> > > > > > message:
>> > > > > >
>> > > > > > hbase(main):001:0> scan 'atpco:ttf_record6'
>> > > > > > ROW                                              COLUMN+CELL
>> > > > > >
>> > > > > > ERROR: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
>> > > > > > Expected nextCallSeq: 1 But the nextCallSeq got from client: 0;
>> > > > > > request=scanner_id: 201542113 number_of_rows: 100 close_scanner: false
>> > > > > > next_call_seq: 0
>> > > > > >   at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3193)
>> > > > > >   at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29587)
>> > > > > >   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>> > > > > >   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>> > > > > >   at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
>> > > > > >   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
>> > > > > >   at java.lang.Thread.run(Thread.java:744)
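For context: OutOfOrderScannerNextException usually means the client
silently retried a scanner next() call whose first attempt failed or timed
out after the server had already bumped its sequence number, so the
client's next_call_seq (0) no longer matches the server's (1); the failed
first attempt here is likely the region-server exception shown below. A
minimal sketch of a scan that keeps each next() RPC short, assuming the
0.98 client API (timeout and caching values are illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;

    public class ScanExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Give slow server-side next() calls more time before the client retries.
        conf.setInt("hbase.client.scanner.timeout.period", 120000);
        try (HTable table = new HTable(conf, "atpco:ttf_record6")) {
          Scan scan = new Scan();
          scan.setCaching(10); // fewer rows per next() RPC keeps each call short
          try (ResultScanner scanner = table.getScanner(scan)) {
            for (Result r : scanner) {
              System.out.println(r);
            }
          }
        }
      }
    }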
>> > > > > >
>> > > > > >
>> > > > > > *And the region server got the following error:*
>> > > > > >
>> > > > > > 2015-06-30 11:08:11,877 ERROR
>> > > > > > [B.defaultRpcServer.handler=27,queue=0,port=60020] ipc.RpcServer:
>> > > > > > Unexpected throwable object
>> > > > > > java.lang.IllegalArgumentException: Negative position
>> > > > > >   at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:675)
>> > > > > >   at org.apache.hadoop.hbase.io.hfile.bucket.FileIOEngine.read(FileIOEngine.java:87)
>> > > > > >   at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:406)
>> > > > > >   at org.apache.hadoop.hbase.io.hfile.LruBlockCache.getBlock(LruBlockCache.java:389)
>> > > > > >   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:359)
>> > > > > >   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:635)
>> > > > > >   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:749)
>> > > > > >   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:136)
>> > > > > >   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:108)
>> > > > > >   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:507)
>> > > > > >   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
>> > > > > >   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3900)
>> > > > > >   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3980)
>> > > > > >   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3858)
>> > > > > >   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3849)
>> > > > > >   at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3245)
>> > > > > >   at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29587)
>> > > > > >   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>> > > > > >   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>>
>
>
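On the root cause: the "Negative position" message comes straight from
NIO. FileChannel.read(ByteBuffer, long) rejects any negative offset before
it touches the file, so a corrupted or overflowed block offset handed to
the file-backed BucketCache engine fails in exactly this way. A minimal
sketch that reproduces the exception (the path is illustrative):

    import java.io.RandomAccessFile;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    public class NegativePositionDemo {
      public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("/tmp/cache.data", "rw")) {
          FileChannel channel = raf.getChannel();
          ByteBuffer dst = ByteBuffer.allocate(64);
          // Any negative position is rejected up front, matching the log above:
          // java.lang.IllegalArgumentException: Negative position
          channel.read(dst, -1L);
        }
      }
    }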
