Re: May I run HBase on top of Alluxio/Tachyon
See Install Snappy Support under:
http://hbase.apache.org/book.html#compressor.install

FYI

On Tue, Jul 5, 2016 at 9:51 PM, kevin wrote:
> 0: jdbc:phoenix:master> select count(1) from STORE_SALES;
> +--+
> | COUNT(1) |
> +--+
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException:
> org.apache.phoenix.exception.PhoenixIOException:
> org.apache.hadoop.hbase.DoNotRetryIOException:
> STORE_SALES,,1467706628930.ca35b82bd80c92d0d501c73956ef836f.: null
> Caused by: java.lang.ArrayIndexOutOfBoundsException
>         at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.setInput(SnappyDecompressor.java:111)
> [rest of stack trace snipped; it appears in full in kevin's message below]
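The root cause in the quoted trace, an ArrayIndexOutOfBoundsException inside SnappyDecompressor.setInput, typically points at a missing or mismatched native Snappy library on the RegionServers, which is why the book's "Install Snappy Support" section is the pointer here. A quick way to check, sketched under the assumption that the hadoop and hbase launchers are on the PATH of each node:

```shell
# List the native libraries Hadoop can actually load;
# the "snappy" line should read "true" with a library path.
hadoop checknative -a

# Have HBase write and re-read a test file with snappy compression.
# A healthy install prints SUCCESS; a broken native snappy setup
# reproduces the same decompression failure as in the trace above.
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-probe snappy
```

Run this on every RegionServer host, not just the client; per the book section linked above, pointing HBASE_LIBRARY_PATH at the native libraries is one way to fix a failing test.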
Re: May I run HBase on top of Alluxio/Tachyon
0: jdbc:phoenix:master> select count(1) from STORE_SALES;
+--+
| COUNT(1) |
+--+
java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException:
org.apache.phoenix.exception.PhoenixIOException:
org.apache.hadoop.hbase.DoNotRetryIOException:
STORE_SALES,,1467706628930.ca35b82bd80c92d0d501c73956ef836f.: null
        at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
        at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
        at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:205)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1340)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1656)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1733)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1695)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1335)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3250)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31068)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2147)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:105)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException
        at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.setInput(SnappyDecompressor.java:111)
        at org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:104)
        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
        at org.apache.hadoop.hbase.io.compress.Compression.decompress(Compression.java:426)
        at org.apache.hadoop.hbase.io.encoding.HFileBlockDefaultDecodingContext.prepareDecoding(HFileBlockDefaultDecodingContext.java:91)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock.unpack(HFileBlock.java:508)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:398)
        at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:540)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:588)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:287)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:201)
        at org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:316)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:260)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:740)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.seekToNextRow(StoreScanner.java:715)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:540)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:142)
        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:4205)
        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:4288)
        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4162)
        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4149)
        at org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:284)
        at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:178)
        ... 12 more

        at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
        at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
        at sqlline.SqlLine.print(SqlLine.java:1653)
        at sqlline.Commands.execute(Commands.java:833)
        at sqlline.Commands.sql(Commands.java:732)
        at sqlline.SqlLine.dispatch(SqlLine.java:808)
        at sqlline.SqlLine.begin(SqlLine.java:681)
        at sqlline.SqlLine.start(SqlLine.java:398)
        at sqlline.SqlLine.main(SqlLine.java:292)

2016-06-21 9:15 GMT+08:00 kevin:
> I have worked out this question :
Re: May I run HBase on top of Alluxio/Tachyon
I have worked out this question :
https://alluxio.atlassian.net/browse/ALLUXIO-2025

2016-06-20 21:02 GMT+08:00 Jean-Marc Spaggiari:
> I think you might want to clean everything and retry. Clean the ZK /hbase
> content as well as your fs /hbase folder and restart...
>
> 2016-06-20 3:22 GMT-04:00 kevin :
> > *I got some error:*
> > 2016-06-20 14:50:45,453 INFO [main] zookeeper.ZooKeeper: Client environment:java.library.path=/home/dcos/hadoop-2.7.1/lib/native
> > [startup log snipped; it appears in full in kevin's message below]
Re: May I run HBase on top of Alluxio/Tachyon
I think you might want to clean everything and retry. Clean the ZK /hbase
content as well as your fs /hbase folder and restart...

2016-06-20 3:22 GMT-04:00 kevin:
> *I got some error:*
> 2016-06-20 14:50:45,453 INFO [main] zookeeper.ZooKeeper: Client environment:java.library.path=/home/dcos/hadoop-2.7.1/lib/native
> [startup log snipped; it appears in full in kevin's message below]
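Jean-Marc's "clean everything" advice, spelled out as commands. This is a destructive sketch based on the defaults visible in the logs (baseZNode=/hbase, rootdir alluxio://master:19998/hbase); the HBASE_HOME and ALLUXIO_HOME variables are assumptions, the Alluxio shell flags may differ between versions, and it wipes all HBase data, so use with care:

```shell
# Stop HBase before touching its state.
$HBASE_HOME/bin/stop-hbase.sh

# Drop HBase's znodes in ZooKeeper (baseZNode=/hbase per the startup log).
$HBASE_HOME/bin/hbase zkcli rmr /hbase

# Drop the HBase root directory in the underlying filesystem,
# here the Alluxio namespace shown in the logs.
$ALLUXIO_HOME/bin/alluxio fs rm -R /hbase

# On restart, HBase recreates hbase.version and its directory layout from scratch.
$HBASE_HOME/bin/start-hbase.sh
```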
Re: May I run HBase on top of Alluxio/Tachyon
*I got some error:*

2016-06-20 14:50:45,453 INFO [main] zookeeper.ZooKeeper: Client environment:java.library.path=/home/dcos/hadoop-2.7.1/lib/native
2016-06-20 14:50:45,453 INFO [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2016-06-20 14:50:45,453 INFO [main] zookeeper.ZooKeeper: Client environment:java.compiler=
2016-06-20 14:50:45,453 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
2016-06-20 14:50:45,453 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2016-06-20 14:50:45,453 INFO [main] zookeeper.ZooKeeper: Client environment:os.version=3.10.0-327.el7.x86_64
2016-06-20 14:50:45,453 INFO [main] zookeeper.ZooKeeper: Client environment:user.name=root
2016-06-20 14:50:45,453 INFO [main] zookeeper.ZooKeeper: Client environment:user.home=/root
2016-06-20 14:50:45,453 INFO [main] zookeeper.ZooKeeper: Client environment:user.dir=/home/dcos/hbase-0.98.16.1-hadoop2
2016-06-20 14:50:45,454 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=slave1:2181,master:2181,slave2:2181 sessionTimeout=9 watcher=master:60x0, quorum=slave1:2181,master:2181,slave2:2181, baseZNode=/hbase
2016-06-20 14:50:45,490 INFO [main-SendThread(slave2:2181)] zookeeper.ClientCnxn: Opening socket connection to server slave2/10.1.3.177:2181. Will not attempt to authenticate using SASL (unknown error)
2016-06-20 14:50:45,498 INFO [main-SendThread(slave2:2181)] zookeeper.ClientCnxn: Socket connection established to slave2/10.1.3.177:2181, initiating session
2016-06-20 14:50:45,537 INFO [main-SendThread(slave2:2181)] zookeeper.ClientCnxn: Session establishment complete on server slave2/10.1.3.177:2181, sessionid = 0x3556c8a93960004, negotiated timeout = 4
2016-06-20 14:50:46,040 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: starting
2016-06-20 14:50:46,043 INFO [RpcServer.listener,port=6] ipc.RpcServer: RpcServer.listener,port=6: starting
2016-06-20 14:50:46,137 INFO [master:master:6] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-06-20 14:50:46,177 INFO [master:master:6] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2016-06-20 14:50:46,180 INFO [master:master:6] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2016-06-20 14:50:46,180 INFO [master:master:6] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-06-20 14:50:46,189 INFO [master:master:6] http.HttpServer: Jetty bound to port 60010
2016-06-20 14:50:46,189 INFO [master:master:6] mortbay.log: jetty-6.1.26
2016-06-20 14:50:46,652 INFO [master:master:6] mortbay.log: Started HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60010
2016-06-20 14:50:47,122 INFO [master:master:6] master.ActiveMasterManager: Registered Active Master=master,6,1466405444533
2016-06-20 14:50:47,123 DEBUG [main-EventThread] master.ActiveMasterManager: A master is now available
2016-06-20 14:50:47,127 INFO [master:master:6] logger.type: getWorkingDirectory: /
2016-06-20 14:50:47,128 INFO [master:master:6] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2016-06-20 14:50:47,131 INFO [master:master:6] logger.type: getFileStatus(alluxio://master:19998/hbase)
2016-06-20 14:50:47,153 INFO [master:master:6] logger.type: Alluxio client (version 1.1.1-SNAPSHOT) is trying to connect with FileSystemMasterClient master @ master/10.1.3.181:19998
2016-06-20 14:50:47,159 INFO [master:master:6] logger.type: Client registered with FileSystemMasterClient master @ master/10.1.3.181:19998
2016-06-20 14:50:47,209 INFO [master:master:6] logger.type: mkdirs(alluxio://master:19998/hbase, rwxrwxrwx)
2016-06-20 14:50:47,227 INFO [master:master:6] logger.type: create(alluxio://master:19998/hbase/.tmp/hbase.version, rw-r--r--, true, 131072, 1, 536870912, null)
2016-06-20 14:50:47,262 INFO [master:master:6] logger.type: Alluxio client (version 1.1.1-SNAPSHOT) is trying to connect with BlockMasterClient master @ master/10.1.3.181:19998
2016-06-20 14:50:47,263 INFO [master:master:6] logger.type: Client registered with BlockMasterClient master @ master/10.1.3.181:19998
2016-06-20 14:50:47,369 INFO [master:master:6] logger.type: Connecting to remote worker @ slave1/10.1.3.176:29998
2016-06-20 14:50:47,411 INFO [master:master:6] logger.type: Connected to remote machine slave1/10.1.3.176:2
2016-06-20 14:50:47,589 INFO [master:master:6] logger.type: status: SUCCESS from remote machine slave1/10.1.3.176:2 received
2016-06-20 14:50:47,612 INFO [master:master:6] logger.type: rename(alluxio://master:19998/hbase/.tmp/hbase.version,
Re: May I run HBase on top of Alluxio/Tachyon
I want to test whether running on Alluxio could improve performance, because
Alluxio is a distributed filesystem layered on top of memory, and its under
filesystem can be HDFS, S3, or something else.

2016-06-16 10:32 GMT+08:00 Ted Yu:
> Since you already have hadoop 2.7.1, why is alluxio 1.1.0 needed ?
>
> Can you illustrate your use case ?
>
> Thanks
>
> On Wed, Jun 15, 2016 at 7:27 PM, kevin wrote:
> > hi,all:
> > I wonder to know If run hbase on Alluxio/Tachyon is possible and a good
> > idea, and can anybody share the experience, thanks.
> > I will try hbase 0.98.16 with hadoop 2.7.1 on top of alluxio 1.1.0.
Re: May I run HBase on top of Alluxio/Tachyon
Since you already have hadoop 2.7.1, why is alluxio 1.1.0 needed ?

Can you illustrate your use case ?

Thanks

On Wed, Jun 15, 2016 at 7:27 PM, kevin wrote:
> hi,all:
> I wonder to know If run hbase on Alluxio/Tachyon is possible and a good
> idea, and can anybody share the experience, thanks.
> I will try hbase 0.98.16 with hadoop 2.7.1 on top of alluxio 1.1.0.
May I run HBase on top of Alluxio/Tachyon
Hi all:

I would like to know whether running HBase on Alluxio/Tachyon is possible and
a good idea; can anybody share their experience? Thanks.

I will try HBase 0.98.16 with Hadoop 2.7.1 on top of Alluxio 1.1.0.
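For anyone trying the same setup: per Alluxio's documented HBase integration, the wiring mostly comes down to pointing hbase.rootdir at the Alluxio URI and putting the version-matched Alluxio client jar on HBase's classpath (e.g. via HBASE_CLASSPATH in hbase-env.sh). A minimal hbase-site.xml sketch, with the host and port taken from this thread's logs rather than anything canonical:

```xml
<!-- conf/hbase-site.xml (sketch; master:19998 is the Alluxio master used in this thread) -->
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>alluxio://master:19998/hbase</value>
  </property>
</configuration>
```

See Alluxio's "Running Apache HBase on Alluxio" guide for the exact client jar name and any extra properties your Alluxio version requires.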