Re: Scan got exception

2015-06-29 Thread Ted Yu
How do you configure BucketCache?

Thanks
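
For context, a file-backed BucketCache on 0.98.x is typically enabled in hbase-site.xml with settings along these lines; the path and size below are illustrative, not taken from the reporter's cluster:

```xml
<!-- Illustrative only: a file-backed BucketCache in hbase-site.xml. -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <!-- "heap", "offheap", or "file:/path" for a file-backed cache -->
  <value>file:/mnt/ssd/bucketcache.data</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <!-- example capacity; check your release's docs for the exact semantics -->
  <value>8192</value>
</property>
```

These values are examples only; consult the documentation for your release for the exact meaning of hbase.bucketcache.size.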

On Mon, Jun 29, 2015 at 8:35 PM, Louis Hust  wrote:

> BTW, the HBase version is 0.98.6 (CDH 5.2.0).
>
> 2015-06-30 11:31 GMT+08:00 Louis Hust :
>
> > Hi, all
> >
> > When I scan a table using hbase shell, got the following message:
> >
> > hbase(main):001:0> scan 'atpco:ttf_record6'
> > ROW  COLUMN+CELL
> >
> > ERROR: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
> > Expected nextCallSeq: 1 But the nextCallSeq got from client: 0;
> > request=scanner_id: 201542113 number_of_rows: 100 close_scanner: false
> > next_call_seq: 0
> > [client-side stack trace snipped]
> >
> > *And the region server got the following error:*
> >
> > 2015-06-30 11:08:11,877 ERROR
> > [B.defaultRpcServer.handler=27,queue=0,port=60020] ipc.RpcServer:
> > Unexpected throwable object
> > java.lang.IllegalArgumentException: Negative position
> > at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:675)
> > at org.apache.hadoop.hbase.io.hfile.bucket.FileIOEngine.read(FileIOEngine.java:87)
> > at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:406)
> > [remainder of the stack trace snipped]
>


Re: Scan got exception

2015-06-29 Thread Louis Hust
BTW, the HBase version is 0.98.6 (CDH 5.2.0).

2015-06-30 11:31 GMT+08:00 Louis Hust :

> [quoted original message, including the client and region server stack
> traces, snipped]


Scan got exception

2015-06-29 Thread Louis Hust
Hi, all

When I scan a table using the HBase shell, I get the following message:

hbase(main):001:0> scan 'atpco:ttf_record6'
ROW  COLUMN+CELL

ERROR: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
Expected nextCallSeq: 1 But the nextCallSeq got from client: 0;
request=scanner_id: 201542113 number_of_rows: 100 close_scanner: false
next_call_seq: 0
at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3193)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29587)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
at java.lang.Thread.run(Thread.java:744)


*And the region server got the following error:*

2015-06-30 11:08:11,877 ERROR
[B.defaultRpcServer.handler=27,queue=0,port=60020] ipc.RpcServer:
Unexpected throwable object
java.lang.IllegalArgumentException: Negative position
at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:675)
at org.apache.hadoop.hbase.io.hfile.bucket.FileIOEngine.read(FileIOEngine.java:87)
at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:406)
at org.apache.hadoop.hbase.io.hfile.LruBlockCache.getBlock(LruBlockCache.java:389)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:359)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:635)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:749)
at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:136)
at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:108)
at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:507)
at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3900)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3980)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3858)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3849)
at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3245)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29587)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
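
The region server error bottoms out in FileChannel.read(ByteBuffer, long), which rejects any negative read offset before touching the disk, so a corrupt or mis-sized bucket-cache offset surfaces exactly this way. A minimal standalone reproduction of the JDK behaviour (plain Java, not HBase code):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class NegativePositionDemo {
    // Returns "ok" on a successful positional read attempt, or the
    // IllegalArgumentException message when the offset is rejected.
    static String readAt(long position) throws IOException {
        Path tmp = Files.createTempFile("bucketcache", ".data");
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
            ch.read(ByteBuffer.allocate(16), position);
            return "ok";
        } catch (IllegalArgumentException e) {
            return e.getMessage();
        } finally {
            Files.deleteIfExists(tmp);
        }
    }

    public static void main(String[] args) throws IOException {
        // Prints the same "Negative position" seen in the region server log.
        System.out.println(readAt(-1L));
    }
}
```

In other words, BucketCache handed FileIOEngine a negative file offset; the message itself comes from the JDK, not from HBase.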


Re: RegionServer 60030 Show All RPC Handler Task is empty

2015-06-29 Thread Louis Hust
Can anybody help?

2015-06-26 23:05 GMT+08:00 Louis Hust :

> Hi, all
>
>
> We are using 0.98.6-cdh5.2.0, rUnknown
>
>
> And when we visit the web UI at regionserverhost:60030 and open Show All RPC
> Handler Tasks, we find "No tasks currently running on this node", even though
> we have configured the following in hbase-site.xml:
>
>   <property>
>     <name>hbase.regionserver.handler.count</name>
>     <value>150</value>
>   </property>
>
>
>
> And on another cluster using 0.96.0-hadoop2, we can see the following tasks
> under Show All RPC Handler Tasks:
>
>
>   Tue Jun 09 17:32:14 CST 2015
>
> RpcServer.handler=9,port=60020
>
> WAITING (since 0sec ago)
>
> Waiting for a call (since 0sec ago)
>
>
> So I want to know: is this a bug, or is there something I misunderstand?
>
>
> Any idea will be appreciated!
>


Re: ports of external zookeeper ensemble

2015-06-29 Thread Ted Yu
Looks like the HBase release you use doesn't have HBASE-12706, which is in
HBase 1.1.0.

FYI
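
In other words, before HBASE-12706 the quorum string is treated as a list of hostnames, and the single hbase.zookeeper.property.clientPort value (default 2181) is applied to every host. This is an illustrative sketch of that logic, not HBase's actual code:

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class QuorumPorts {
    // Pre-HBASE-12706 behaviour (sketch): any per-host port in the quorum
    // string is dropped, and the single clientPort is applied everywhere.
    static String buildEnsemble(String quorum, int clientPort) {
        return Arrays.stream(quorum.split(","))
                .map(host -> host.split(":")[0] + ":" + clientPort)
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        String quorum = "kafka01:2181,kafka02:2182,kafka03:2183,data04:2184,data05:2185";
        // Every host comes back on port 2181, matching the reported log output.
        System.out.println(buildEnsemble(quorum, 2181));
    }
}
```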

On Mon, Jun 29, 2015 at 2:40 AM, 俞忠静  wrote:

> Hi dear all,
>
> [quoted message snipped: the quorum was configured as
> kafka01:2181,kafka02:2182,kafka03:2183,data04:2184,data05:2185, but the
> master log showed every host connecting on port 2181]
>


[RESULT][VOTE] First release candidate for HBase 1.1.1 (RC0) is available

2015-06-29 Thread Nick Dimiduk
The vote has passed with 3 binding +1's. J-M's concerns regarding test
stability are well noted; hopefully we'll be more impressive in future
releases.

Thanks for testing the release.
Nick

On Tue, Jun 23, 2015 at 4:25 PM, Nick Dimiduk  wrote:

> I'm happy to announce the first release candidate of HBase 1.1.1
> (HBase-1.1.1RC0) is available for download at
> https://dist.apache.org/repos/dist/dev/hbase/hbase-1.1.1RC0/
>
> Maven artifacts are also available in the staging repository
> https://repository.apache.org/content/repositories/orgapachehbase-1087/
>
> Artifacts are signed with my code signing subkey 0xAD9039071C3489BD,
> available in the Apache keys directory
> https://people.apache.org/keys/committer/ndimiduk.asc
>
> There's also a signed tag for this release at
> https://git-wip-us.apache.org/repos/asf?p=hbase.git;a=tag;h=af1934d826cab80f727e9a95c5b564f04da73259
>
> HBase 1.1.1 is the first patch release in the HBase 1.1 line, continuing
> on the theme of bringing a stable, reliable database to the Hadoop and
> NoSQL communities. This release includes over 100 bug fixes since the 1.1.0
> release, including an assignment manager bug that can lead to data loss in
> rare cases. Users of 1.1.0 are strongly encouraged to update to 1.1.1 as
> soon as possible.
>
> The full list of issues can be found at
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310753&version=12332169
>
> Please try out this candidate and vote +/-1 by midnight Pacific time on
> Sunday, 2015-06-28 as to whether we should release these artifacts as HBase
> 1.1.1.
>
> Thanks,
> Nick
>


ports of external zookeeper ensemble

2015-06-29 Thread 俞忠静
Hi dear all,

I have an existing zookeeper ensemble:
kafka01:2181,kafka02:2182,kafka03:2183,data04:2184,data05:2185 (note that the
port differs per host). I set export HBASE_MANAGES_ZK=false in hbase-env.sh,
and in hbase-site.xml:

  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>kafka01:2181,kafka02:2182,kafka03:2183,data04:2184,data05:2185</value>
  </property>

But after I started the HBase cluster, I found this in the log:
2015-06-25 11:02:43,910 INFO  [main] zookeeper.RecoverableZooKeeper: Process 
identifier=master:16020 connecting to ZooKeeper 
ensemble=data04:2181,kafka03:2181,kafka02:2181,kafka01:2181,data05:2181

Why have all the ports been changed to 2181?


Re: 2 bucket caches?

2015-06-29 Thread Nick Dimiduk
Hi J-M,

N-leveled caching is something I've discussed with some folks, but it hasn't
been done. We already have multi-cache management strategies, such as
CombinedBlockCache, so this would mean making them more generic and exposing
them through configuration. Something you'd be interested in taking on?

Dunno if you noticed, but we now support caching blocks out into memcached
as well.

-n
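
For readers following along, the leveled idea being discussed can be sketched as a chain of caches: a read checks each level in order, and a block evicted from level n is demoted to level n+1. This is a toy sketch under those assumptions, not HBase's CombinedBlockCache:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of a leveled block-cache chain (not HBase code).
public class LeveledCache {
    private final Map<String, byte[]>[] levels;

    @SafeVarargs
    public LeveledCache(Map<String, byte[]>... levels) {
        this.levels = levels;
    }

    // Look the key up level by level (e.g. L2 offheap, then L3 on SSD).
    public byte[] get(String key) {
        for (Map<String, byte[]> level : levels) {
            byte[] block = level.get(key);
            if (block != null) {
                return block;
            }
        }
        return null; // full miss: caller falls back to HDFS
    }

    // Evicting from level n demotes the block to level n + 1, if one exists.
    public void evict(int levelIdx, String key) {
        byte[] block = levels[levelIdx].remove(key);
        if (block != null && levelIdx + 1 < levels.length) {
            levels[levelIdx + 1].put(key, block);
        }
    }
}
```

JM's proposed flume-like configuration syntax would amount to naming the levels of such a chain and assigning each its own ioengine.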

On Sun, Jun 28, 2015 at 11:20 PM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:

> Hi,
>
> Is it possible to have 2 bucket cache on a single region server?
>
> Like L2 and L3? I would like to have L2 offheap and block evicted from L2
> going into L3 on SSD. So we already have something like that? Or should I
> open a JIRA?
>
> hbase.bucketcache.ioengine can get only one value. Might be nice to have a
> flume-like approach...
>
> hbase.bucketcache=myoffheap,myssddrive
> hbase.bucketcache.myoffheap.ioengine=offheap
> hbase.bucketcache.myssddrive.ioengine=file://my_ssd_mnt/there
>
> And keep the order specified in hbase.bucketcache, so myoffheap=L2,
> myssddrive=L3, etc.?
>
> Thanks,
>
> JM
>


Re: 2 bucket caches?

2015-06-29 Thread Jean-Marc Spaggiari
Hi Michael,

Everything you said is exactly what I have in mind: a layered hierarchy with
different storage engines (flash, SSD, memory, etc.) that you can configure.
It's not just offheap, but anything that can offload the BlockCache and is
still faster than going to the spinning drives.

JM

2015-06-29 10:43 GMT-04:00 Michael Segel :

> I think you may want to think a bit about this…
>
> [rest of the quoted message snipped]


Re: 2 bucket caches?

2015-06-29 Thread Michael Segel
I think you may want to think a bit about this… 

How far do you want to go with your memory management? 

'Off heap' is a new nifty way of saying application level swap and memory 
management.  So what you are basically saying is that I have memory, local 
persistence, then HDFS persistence. 
And your local persistence could be anything… (PCIe based flash, UltraDIMMs, 
RRAM (when it hits the market), SSDs, even raided spinning rust… )

If you’re going in that direction, what is tachyon doing? 

If you want to do this… and I'm not saying it's a bad idea, you'll want to think
a bit more generically. Essentially it's a layered hierarchy (memory, p1, p2, …)
where p(n) is a pool of devices which have a set of rules on how to propagate
pages up or down the hierarchy.





> On Jun 29, 2015, at 1:20 AM, Jean-Marc Spaggiari  
> wrote:
> 
> [quoted message snipped]

The opinions expressed here are mine, while they may reflect a cognitive 
thought, that is purely accidental. 
Use at your own risk. 
Michael Segel
michael_segel (AT) hotmail.com