See Install Snappy Support under:

http://hbase.apache.org/book.html#compressor.install

FYI
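For completeness: a quick way to confirm whether the Snappy codec actually works on a RegionServer host is HBase's CompressionTest utility (the file path below is just an example; run it on each RegionServer):

```shell
# Verifies that HBase can compress and decompress through the Snappy codec.
hbase org.apache.hadoop.hbase.util.CompressionTest \
    file:///tmp/snappy-check.txt snappy

# Also useful: list which native libraries (incl. snappy) Hadoop can load.
hadoop checknative -a
```

If CompressionTest fails with the same ArrayIndexOutOfBoundsException, the native Snappy library is likely missing or mismatched on that host.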

On Tue, Jul 5, 2016 at 9:51 PM, kevin <kiss.kevin...@gmail.com> wrote:

> 0: jdbc:phoenix:master> select count(1) from STORE_SALES;
> +------------------------------------------+
> |                 COUNT(1)                 |
> +------------------------------------------+
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: STORE_SALES,,1467706628930.ca35b82bd80c92d0d501c73956ef836f.: null
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
> at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
> at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:205)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1340)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1656)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1733)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1695)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1335)
> at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3250)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31068)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2147)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:105)
> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ArrayIndexOutOfBoundsException
> at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.setInput(SnappyDecompressor.java:111)
> at org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:104)
> at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
> at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
> at org.apache.hadoop.hbase.io.compress.Compression.decompress(Compression.java:426)
> at org.apache.hadoop.hbase.io.encoding.HFileBlockDefaultDecodingContext.prepareDecoding(HFileBlockDefaultDecodingContext.java:91)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock.unpack(HFileBlock.java:508)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:398)
> at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:540)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:588)
> at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:287)
> at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:201)
> at org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
> at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:316)
> at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:260)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:740)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.seekToNextRow(StoreScanner.java:715)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:540)
> at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:142)
> at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:4205)
> at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:4288)
> at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4162)
> at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4149)
> at org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:284)
> at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:178)
> ... 12 more
>
> at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
> at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
> at sqlline.SqlLine.print(SqlLine.java:1653)
> at sqlline.Commands.execute(Commands.java:833)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
>
> 2016-06-21 9:15 GMT+08:00 kevin <kiss.kevin...@gmail.com>:
>
> > I have worked out this problem; see:
> > https://alluxio.atlassian.net/browse/ALLUXIO-2025
> >
> > 2016-06-20 21:02 GMT+08:00 Jean-Marc Spaggiari <jean-m...@spaggiari.org
> >:
> >
> >> I think you might want to clean everything and retry. Clean the ZK /hbase
> >> content as well as your fs /hbase folder and restart...
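A sketch of that cleanup, assuming default script locations and an HBase root of alluxio://master:19998/hbase (adjust paths to your deployment; this wipes all HBase data):

```shell
# Stop HBase before touching its state
stop-hbase.sh

# Remove HBase's znode tree from ZooKeeper ("hbase zkcli" wraps zkCli.sh)
hbase zkcli rmr /hbase

# Remove the HBase root directory from the underlying filesystem, then restart
hadoop fs -rm -r alluxio://master:19998/hbase
start-hbase.sh
```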
> >>
> >> 2016-06-20 3:22 GMT-04:00 kevin <kiss.kevin...@gmail.com>:
> >>
> >> > *I got some error:*
> >> >
> >> > 2016-06-20 14:50:45,453 INFO  [main] zookeeper.ZooKeeper: Client
> >> > environment:java.library.path=/home/dcos/hadoop-2.7.1/lib/native
> >> > 2016-06-20 14:50:45,453 INFO  [main] zookeeper.ZooKeeper: Client
> >> > environment:java.io.tmpdir=/tmp
> >> > 2016-06-20 14:50:45,453 INFO  [main] zookeeper.ZooKeeper: Client
> >> > environment:java.compiler=<NA>
> >> > 2016-06-20 14:50:45,453 INFO  [main] zookeeper.ZooKeeper: Client
> >> > environment:os.name=Linux
> >> > 2016-06-20 14:50:45,453 INFO  [main] zookeeper.ZooKeeper: Client
> >> > environment:os.arch=amd64
> >> > 2016-06-20 14:50:45,453 INFO  [main] zookeeper.ZooKeeper: Client
> >> > environment:os.version=3.10.0-327.el7.x86_64
> >> > 2016-06-20 14:50:45,453 INFO  [main] zookeeper.ZooKeeper: Client
> >> > environment:user.name=root
> >> > 2016-06-20 14:50:45,453 INFO  [main] zookeeper.ZooKeeper: Client
> >> > environment:user.home=/root
> >> > 2016-06-20 14:50:45,453 INFO  [main] zookeeper.ZooKeeper: Client
> >> > environment:user.dir=/home/dcos/hbase-0.98.16.1-hadoop2
> >> > 2016-06-20 14:50:45,454 INFO  [main] zookeeper.ZooKeeper: Initiating
> >> client
> >> > connection, connectString=slave1:2181,master:2181,slave2:2181
> >> > sessionTimeout=90000
> >> > watcher=master:600000x0, quorum=slave1:2181,master:2181,slave2:2181,
> >> > baseZNode=/hbase
> >> > 2016-06-20 14:50:45,490 INFO  [main-SendThread(slave2:2181)]
> >> > zookeeper.ClientCnxn: Opening socket connection to server slave2/
> >> > 10.1.3.177:2181. Will not attempt
> >> >  to authenticate using SASL (unknown error)
> >> > 2016-06-20 14:50:45,498 INFO  [main-SendThread(slave2:2181)]
> >> > zookeeper.ClientCnxn: Socket connection established to slave2/
> >> > 10.1.3.177:2181, initiating session
> >> > 2016-06-20 14:50:45,537 INFO  [main-SendThread(slave2:2181)]
> >> > zookeeper.ClientCnxn: Session establishment complete on server slave2/
> >> > 10.1.3.177:2181, sessionid =
> >> >  0x3556c8a93960004, negotiated timeout = 40000
> >> > 2016-06-20 14:50:46,040 INFO  [RpcServer.responder] ipc.RpcServer:
> >> > RpcServer.responder: starting
> >> > 2016-06-20 14:50:46,043 INFO  [RpcServer.listener,port=60000]
> >> > ipc.RpcServer: RpcServer.listener,port=60000: starting
> >> > 2016-06-20 14:50:46,137 INFO  [master:master:60000] mortbay.log:
> >> Logging to
> >> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> >> > org.mortbay.log.Slf4jLog
> >> > 2016-06-20 14:50:46,177 INFO  [master:master:60000] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> >> > 2016-06-20 14:50:46,180 INFO  [master:master:60000] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
> >> > 2016-06-20 14:50:46,180 INFO  [master:master:60000] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
> >> > 2016-06-20 14:50:46,189 INFO  [master:master:60000] http.HttpServer: Jetty bound to port 60010
> >> > 2016-06-20 14:50:46,189 INFO  [master:master:60000] mortbay.log: jetty-6.1.26
> >> > 2016-06-20 14:50:46,652 INFO  [master:master:60000] mortbay.log: Started HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60010
> >> > 2016-06-20 14:50:47,122 INFO  [master:master:60000]
> >> > master.ActiveMasterManager: Registered Active
> >> > Master=master,60000,1466405444533
> >> > 2016-06-20 14:50:47,123 DEBUG [main-EventThread]
> >> > master.ActiveMasterManager: A master is now available
> >> > 2016-06-20 14:50:47,127 INFO  [master:master:60000] logger.type:
> >> > getWorkingDirectory: /
> >> > 2016-06-20 14:50:47,128 INFO  [master:master:60000]
> >> > Configuration.deprecation: fs.default.name is deprecated. Instead,
> use
> >> > fs.defaultFS
> >> > 2016-06-20 14:50:47,131 INFO  [master:master:60000] logger.type:
> >> > getFileStatus(alluxio://master:19998/hbase)
> >> > 2016-06-20 14:50:47,153 INFO  [master:master:60000] logger.type: Alluxio client (version 1.1.1-SNAPSHOT) is trying to connect with FileSystemMasterClient master @ master/10.1.3.181:19998
> >> > 2016-06-20 14:50:47,159 INFO  [master:master:60000] logger.type: Client registered with FileSystemMasterClient master @ master/10.1.3.181:19998
> >> > 2016-06-20 14:50:47,209 INFO  [master:master:60000] logger.type: mkdirs(alluxio://master:19998/hbase, rwxrwxrwx)
> >> > 2016-06-20 14:50:47,227 INFO  [master:master:60000] logger.type: create(alluxio://master:19998/hbase/.tmp/hbase.version, rw-r--r--, true, 131072, 1, 536870912, null)
> >> > 2016-06-20 14:50:47,262 INFO  [master:master:60000] logger.type: Alluxio client (version 1.1.1-SNAPSHOT) is trying to connect with BlockMasterClient master @ master/10.1.3.181:19998
> >> > 2016-06-20 14:50:47,263 INFO  [master:master:60000] logger.type: Client registered with BlockMasterClient master @ master/10.1.3.181:19998
> >> > 2016-06-20 14:50:47,369 INFO  [master:master:60000] logger.type:
> >> Connecting
> >> > to remote worker @ slave1/10.1.3.176:29998
> >> > 2016-06-20 14:50:47,411 INFO  [master:master:60000] logger.type:
> >> Connected
> >> > to remote machine slave1/10.1.3.176:29999
> >> > 2016-06-20 14:50:47,589 INFO  [master:master:60000] logger.type:
> status:
> >> > SUCCESS from remote machine slave1/10.1.3.176:29999 received
> >> > 2016-06-20 14:50:47,612 INFO  [master:master:60000] logger.type: rename(alluxio://master:19998/hbase/.tmp/hbase.version, alluxio://master:19998/hbase/hbase.version)
> >> > 2016-06-20 14:50:47,640 INFO  [master:master:60000] util.FSUtils:
> >> Created
> >> > version file at alluxio://master:19998/hbase with version=8
> >> > 2016-06-20 14:50:47,640 INFO  [master:master:60000] logger.type:
> >> > getFileStatus(alluxio://master:19998/hbase/hbase.id)
> >> > 2016-06-20 14:50:47,642 INFO  [master:master:60000] logger.type: create(alluxio://master:19998/hbase/.tmp/hbase.id, rw-r--r--, true, 131072, 1, 536870912, null)
> >> > 2016-06-20 14:50:47,648 INFO  [master:master:60000] logger.type:
> >> Connecting
> >> > to remote worker @ slave2/10.1.3.177:29998
> >> > 2016-06-20 14:50:47,650 INFO  [master:master:60000] logger.type:
> >> Connected
> >> > to remote machine slave2/10.1.3.177:29999
> >> > 2016-06-20 14:50:47,804 INFO  [master:master:60000] logger.type:
> status:
> >> > SUCCESS from remote machine slave2/10.1.3.177:29999 received
> >> > 2016-06-20 14:50:47,807 INFO  [master:master:60000] logger.type:
> >> > rename(alluxio://master:19998/hbase/.tmp/hbase.id,
> >> > alluxio://master:19998/hbase/hbase.id)
> >> > 2016-06-20 14:50:47,810 DEBUG [master:master:60000] util.FSUtils: Created cluster ID file at alluxio://master:19998/hbase/hbase.id with ID: 49f53428-bf3c-4ff3-80b8-179de195b5ed
> >> > 2016-06-20 14:50:47,810 INFO  [master:master:60000] logger.type:
> >> > getFileStatus(alluxio://master:19998/hbase/hbase.id)
> >> > 2016-06-20 14:50:47,811 INFO  [master:master:60000] logger.type:
> >> > getFileStatus(alluxio://master:19998/hbase/hbase.id)
> >> > 2016-06-20 14:50:47,813 INFO  [master:master:60000] logger.type:
> >> > open(alluxio://master:19998/hbase/hbase.id, 131072)
> >> > 2016-06-20 14:50:47,931 INFO  [master:master:60000] logger.type:
> >> Connecting
> >> > to remote worker @ slave2/10.1.3.177:29998
> >> > 2016-06-20 14:50:47,944 INFO  [master:master:60000] logger.type:
> >> Connected
> >> > to remote machine slave2/10.1.3.177:29999
> >> > 2016-06-20 14:50:47,947 INFO  [master:master:60000] logger.type: Data
> >> > 67545071616 from remote machine slave2/10.1.3.177:29999 received
> >> > 2016-06-20 14:50:47,973 INFO  [master:master:60000] logger.type:
> >> > getFileStatus(alluxio://master:19998/hbase/data/hbase/meta/1588230740)
> >> > 2016-06-20 14:50:47,976 INFO  [master:master:60000]
> >> > master.MasterFileSystem: BOOTSTRAP: creating hbase:meta region
> >> > 2016-06-20 14:50:47,978 INFO  [master:master:60000] logger.type:
> >> > getWorkingDirectory: /
> >> > 2016-06-20 14:50:47,979 INFO  [master:master:60000] logger.type:
> >> > getWorkingDirectory: /
> >> > 2016-06-20 14:50:48,003 INFO  [master:master:60000] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
> >> > 2016-06-20 14:50:48,124 INFO  [master:master:60000] regionserver.HRegion: creating HRegion hbase:meta HTD == 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}, {NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '10', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '8192', IN_MEMORY => 'false', BLOCKCACHE => 'false'} RootDir = alluxio://master:19998/hbase Table name == hbase:meta
> >> > 2016-06-20 14:50:48,126 INFO  [master:master:60000] logger.type:
> >> > getFileStatus(alluxio://master:19998/hbase/data/hbase/meta/1588230740)
> >> > 2016-06-20 14:50:48,129 INFO  [master:master:60000] logger.type:
> >> > mkdirs(alluxio://master:19998/hbase/data/hbase/meta/1588230740,
> >> rwxrwxrwx)
> >> > 2016-06-20 14:50:48,139 INFO  [master:master:60000] logger.type: create(alluxio://master:19998/hbase/data/hbase/meta/1588230740/.regioninfo, rw-rw-rw-, true, 131072, 1, 536870912, null)
> >> > 2016-06-20 14:50:48,143 INFO  [master:master:60000] logger.type:
> >> Connecting
> >> > to remote worker @ slave1/10.1.3.176:29998
> >> > 2016-06-20 14:50:48,145 INFO  [master:master:60000] logger.type:
> >> Connected
> >> > to remote machine slave1/10.1.3.176:29999
> >> > 2016-06-20 14:50:48,148 INFO  [master:master:60000] logger.type:
> status:
> >> > SUCCESS from remote machine slave1/10.1.3.176:29999 received
> >> > 2016-06-20 14:50:48,158 INFO  [master:master:60000] wal.FSHLog:
> WAL/HLog
> >> > configuration: blocksize=512 MB, rollsize=486.40 MB, enabled=true
> >> > 2016-06-20 14:50:48,158 INFO  [master:master:60000] logger.type:
> >> >
> >>
> getFileStatus(alluxio://master:19998/hbase/data/hbase/meta/1588230740/WALs)
> >> > 2016-06-20 14:50:48,160 INFO  [master:master:60000] logger.type:
> >> > mkdirs(alluxio://master:19998/hbase/data/hbase/meta/1588230740/WALs,
> >> > rwxrwxrwx)
> >> > 2016-06-20 14:50:48,161 INFO  [master:master:60000] logger.type:
> >> >
> >> >
> >>
> getFileStatus(alluxio://master:19998/hbase/data/hbase/meta/1588230740/oldWALs)
> >> > 2016-06-20 14:50:48,163 INFO  [master:master:60000] logger.type:
> >> >
> mkdirs(alluxio://master:19998/hbase/data/hbase/meta/1588230740/oldWALs,
> >> > rwxrwxrwx)
> >> > 2016-06-20 14:50:48,163 INFO  [master:master:60000] logger.type:
> >> >
> >> >
> >>
> getFileStatus(alluxio://master:19998/hbase/data/hbase/meta/1588230740/WALs/hlog.1466405448163)
> >> > 2016-06-20 14:50:48,167 INFO  [master:master:60000] logger.type: create(alluxio://master:19998/hbase/data/hbase/meta/1588230740/WALs/hlog.1466405448163, rw-rw-rw-, false, 131072, 1, 536870912, null)
> >> > 2016-06-20 14:50:48,171 INFO  [master:master:60000] logger.type:
> >> Connecting
> >> > to remote worker @ slave1/10.1.3.176:29998
> >> > 2016-06-20 14:50:48,180 INFO  [master:master:60000] wal.FSHLog: New
> WAL
> >> > /hbase/data/hbase/meta/1588230740/WALs/hlog.1466405448163
> >> > *2016-06-20 14:50:48,180 INFO  [master:master:60000] wal.FSHLog: FileSystem's output stream doesn't support getNumCurrentReplicas; --HDFS-826 not available; fsOut=alluxio.client.file.FileOutStream*
> >> > *2016-06-20 14:50:48,180 INFO  [master:master:60000] wal.FSHLog: FileSystem's output stream doesn't support getPipeline; not available; fsOut=alluxio.client.file.FileOutStream*
> >> > 2016-06-20 14:50:48,194 DEBUG [master:master:60000]
> >> regionserver.HRegion:
> >> > Instantiated hbase:meta,,1.1588230740
> >> > 2016-06-20 14:50:48,194 INFO  [master:master:60000] logger.type:
> >> >
> >> >
> >>
> getFileStatus(alluxio://master:19998/hbase/data/hbase/meta/1588230740/.regioninfo)
> >> > 2016-06-20 14:50:48,195 INFO  [master:master:60000] logger.type:
> >> > delete(alluxio://master:19998/hbase/data/hbase/meta/1588230740/.tmp,
> >> true)
> >> > 2016-06-20 14:50:48,202 INFO  [master:master:60000] logger.type:
> delete
> >> > failed: Path /hbase/data/hbase/meta/1588230740/.tmp does not exist
> >> > 2016-06-20 14:50:48,265 INFO  [StoreOpener-1588230740-1] logger.type:
> >> >
> >>
> getFileStatus(alluxio://master:19998/hbase/data/hbase/meta/1588230740/info)
> >> > 2016-06-20 14:50:48,271 INFO  [StoreOpener-1588230740-1] logger.type:
> >> > mkdirs(alluxio://master:19998/hbase/data/hbase/meta/1588230740/info,
> >> > rwxrwxrwx)
> >> > 2016-06-20 14:50:48,286 INFO  [StoreOpener-1588230740-1] compactions.CompactionConfiguration: size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
> >> > 2016-06-20 14:50:48,287 INFO  [StoreOpener-1588230740-1] logger.type:
> >> >
> listStatus(alluxio://master:19998/hbase/data/hbase/meta/1588230740/info)
> >> > 2016-06-20 14:50:48,297 DEBUG [StoreOpener-1588230740-1]
> >> > regionserver.HRegionFileSystem: No StoreFiles for:
> >> > alluxio://master:19998/hbase/data/hbase/meta/1588230740/info
> >> > 2016-06-20 14:50:48,307 INFO  [StoreOpener-1588230740-1]
> >> util.ChecksumType:
> >> > Checksum using org.apache.hadoop.util.PureJavaCrc32
> >> > 2016-06-20 14:50:48,308 INFO  [StoreOpener-1588230740-1]
> >> util.ChecksumType:
> >> > Checksum can use org.apache.hadoop.util.PureJavaCrc32C
> >> > 2016-06-20 14:50:48,310 INFO  [master:master:60000] logger.type:
> >> >
> >> >
> >>
> getFileStatus(alluxio://master:19998/hbase/data/hbase/meta/1588230740/recovered.edits)
> >> > 2016-06-20 14:50:48,313 DEBUG [master:master:60000]
> >> regionserver.HRegion:
> >> > Found 0 recovered edits file(s) under
> >> > alluxio://master:19998/hbase/data/hbase/meta/1588230740
> >> > 2016-06-20 14:50:48,313 INFO  [master:master:60000] logger.type:
> >> >
> >> >
> >>
> getFileStatus(alluxio://master:19998/hbase/data/hbase/meta/1588230740/.splits)
> >> > 2016-06-20 14:50:48,314 INFO  [master:master:60000] logger.type:
> >> >
> delete(alluxio://master:19998/hbase/data/hbase/meta/1588230740/.merges,
> >> > true)
> >> > 2016-06-20 14:50:48,315 INFO  [master:master:60000] logger.type:
> delete
> >> > failed: Path /hbase/data/hbase/meta/1588230740/.merges does not exist
> >> > 2016-06-20 14:50:48,317 INFO  [master:master:60000]
> >> regionserver.HRegion:
> >> > Onlined 1588230740; next sequenceid=1
> >> > 2016-06-20 14:50:48,317 DEBUG [master:master:60000]
> >> regionserver.HRegion:
> >> > Closing hbase:meta,,1.1588230740: disabling compactions & flushes
> >> > 2016-06-20 14:50:48,317 DEBUG [master:master:60000]
> >> regionserver.HRegion:
> >> > Updates disabled for region hbase:meta,,1.1588230740
> >> > 2016-06-20 14:50:48,318 INFO
> >> >  [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore:
> >> Closed
> >> > info
> >> > 2016-06-20 14:50:48,318 INFO  [master:master:60000]
> >> regionserver.HRegion:
> >> > Closed hbase:meta,,1.1588230740
> >> > 2016-06-20 14:50:48,319 DEBUG [master:master:60000-WAL.AsyncNotifier]
> >> > wal.FSHLog: master:master:60000-WAL.AsyncNotifier interrupted while
> >> waiting
> >> > for  notification from AsyncSyncer thread
> >> > 2016-06-20 14:50:48,319 INFO  [master:master:60000-WAL.AsyncNotifier]
> >> > wal.FSHLog: master:master:60000-WAL.AsyncNotifier exiting
> >> > 2016-06-20 14:50:48,319 DEBUG [master:master:60000-WAL.AsyncSyncer0]
> >> > wal.FSHLog: master:master:60000-WAL.AsyncSyncer0 interrupted while
> >> waiting
> >> > for notification from AsyncWriter thread
> >> > 2016-06-20 14:50:48,319 INFO  [master:master:60000-WAL.AsyncSyncer0]
> >> > wal.FSHLog: master:master:60000-WAL.AsyncSyncer0 exiting
> >> > 2016-06-20 14:50:48,319 DEBUG [master:master:60000-WAL.AsyncSyncer1]
> >> > wal.FSHLog: master:master:60000-WAL.AsyncSyncer1 interrupted while
> >> waiting
> >> > for notification from AsyncWriter thread
> >> > 2016-06-20 14:50:48,319 INFO  [master:master:60000-WAL.AsyncSyncer1]
> >> > wal.FSHLog: master:master:60000-WAL.AsyncSyncer1 exiting
> >> > 2016-06-20 14:50:48,319 DEBUG [master:master:60000-WAL.AsyncSyncer2]
> >> > wal.FSHLog: master:master:60000-WAL.AsyncSyncer2 interrupted while
> >> waiting
> >> > for notification from AsyncWriter thread
> >> > 2016-06-20 14:50:48,319 INFO  [master:master:60000-WAL.AsyncSyncer2]
> >> > wal.FSHLog: master:master:60000-WAL.AsyncSyncer2 exiting
> >> > 2016-06-20 14:50:48,319 DEBUG [master:master:60000-WAL.AsyncSyncer3]
> >> > wal.FSHLog: master:master:60000-WAL.AsyncSyncer3 interrupted while
> >> waiting
> >> > for notification from AsyncWriter thread
> >> > 2016-06-20 14:50:48,320 INFO  [master:master:60000-WAL.AsyncSyncer3]
> >> > wal.FSHLog: master:master:60000-WAL.AsyncSyncer3 exiting
> >> > 2016-06-20 14:50:48,320 DEBUG [master:master:60000-WAL.AsyncSyncer4]
> >> > wal.FSHLog: master:master:60000-WAL.AsyncSyncer4 interrupted while
> >> waiting
> >> > for notification from AsyncWriter thread
> >> > 2016-06-20 14:50:48,320 INFO  [master:master:60000-WAL.AsyncSyncer4]
> >> > wal.FSHLog: master:master:60000-WAL.AsyncSyncer4 exiting
> >> > 2016-06-20 14:50:48,320 DEBUG [master:master:60000-WAL.AsyncWriter]
> >> > wal.FSHLog: master:master:60000-WAL.AsyncWriter interrupted while
> >> waiting
> >> > for newer writes added to local buffer
> >> > 2016-06-20 14:50:48,320 INFO  [master:master:60000-WAL.AsyncWriter]
> >> > wal.FSHLog: master:master:60000-WAL.AsyncWriter exiting
> >> > 2016-06-20 14:50:48,320 DEBUG [master:master:60000] wal.FSHLog:
> Closing
> >> WAL
> >> > writer in alluxio://master:19998/hbase/data/hbase/meta/1588230740/WALs
> >> > 2016-06-20 14:50:48,322 INFO  [master:master:60000] logger.type:
> >> Connected
> >> > to remote machine slave1/10.1.3.176:29999
> >> > 2016-06-20 14:50:48,324 INFO  [master:master:60000] logger.type:
> status:
> >> > SUCCESS from remote machine slave1/10.1.3.176:29999 received
> >> > 2016-06-20 14:50:48,327 INFO  [master:master:60000] logger.type:
> >> >
> >>
> getFileStatus(alluxio://master:19998/hbase/data/hbase/meta/1588230740/WALs)
> >> > 2016-06-20 14:50:48,327 INFO  [master:master:60000] logger.type:
> >> >
> listStatus(alluxio://master:19998/hbase/data/hbase/meta/1588230740/WALs)
> >> > 2016-06-20 14:50:48,328 INFO  [master:master:60000] logger.type:
> >> >
> >> >
> >>
> rename(alluxio://master:19998/hbase/data/hbase/meta/1588230740/WALs/hlog.1466405448163,
> >> >
> >> >
> >>
> alluxio://master:19998/hbase/data/hbase/meta/1588230740/oldWALs/hlog.1466405448163)
> >> > 2016-06-20 14:50:48,331 DEBUG [master:master:60000] wal.FSHLog: Moved
> 1
> >> WAL
> >> > file(s) to /hbase/data/hbase/meta/1588230740/oldWALs
> >> > 2016-06-20 14:50:48,332 INFO  [master:master:60000] logger.type:
> >> > delete(alluxio://master:19998/hbase/data/hbase/meta/1588230740/WALs,
> >> true)
> >> > 2016-06-20 14:50:48,333 INFO  [master:master:60000] logger.type:
> >> > listStatus(alluxio://master:19998/hbase/data/hbase/meta/.tabledesc)
> >> > 2016-06-20 14:50:48,335 ERROR [master:master:60000] master.HMaster:
> >> > Unhandled exception. Starting shutdown.
> >> > java.io.IOException: alluxio.exception.FileDoesNotExistException: Path /hbase/data/hbase/meta/.tabledesc does not exist
> >> > at alluxio.hadoop.AbstractFileSystem.listStatus(AbstractFileSystem.java:462)
> >> > at alluxio.hadoop.FileSystem.listStatus(FileSystem.java:25)
> >> > at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1515)
> >> > at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1555)
> >> > at org.apache.hadoop.hbase.util.FSUtils.listStatus(FSUtils.java:1659)
> >> > at org.apache.hadoop.hbase.util.FSTableDescriptors.getCurrentTableInfoStatus(FSTableDescriptors.java:369)
> >> > at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:350)
> >> > at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:331)
> >> > at org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:726)
> >> > at org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptor(FSTableDescriptors.java:707)
> >> > at org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptor(FSTableDescriptors.java:694)
> >> > at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:516)
> >> > at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:154)
> >> > at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:130)
> >> > at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:881)
> >> > at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:684)
> >> > at java.lang.Thread.run(Thread.java:745)
> >> > Caused by: alluxio.exception.FileDoesNotExistException: Path /hbase/data/hbase/meta/.tabledesc does not exist
> >> > at alluxio.client.file.BaseFileSystem.listStatus(BaseFileSystem.java:199)
> >> > at alluxio.client.file.BaseFileSystem.listStatus(BaseFileSystem.java:188)
> >> > at alluxio.hadoop.AbstractFileSystem.listStatus(AbstractFileSystem.java:460)
> >> > ... 16 more
> >> > 2016-06-20 14:50:48,336 INFO  [master:master:60000] master.HMaster:
> >> > Aborting
> >> > 2016-06-20 14:50:48,336 DEBUG [master:master:60000] master.HMaster:
> >> > Stopping service threads
> >> > 2016-06-20 14:50:48,336 INFO  [master:master:60000] ipc.RpcServer:
> >> Stopping
> >> > server on 60000
> >> > 2016-06-20 14:50:48,336 INFO  [RpcServer.listener,port=60000]
> >> > ipc.RpcServer: RpcServer.listener,port=60000: stopping
> >> > 2016-06-20 14:50:48,336 INFO  [master:master:60000] master.HMaster:
> >> > Stopping infoServer
> >> > 2016-06-20 14:50:48,337 INFO  [RpcServer.responder] ipc.RpcServer:
> >> > RpcServer.responder: stopped
> >> > 2016-06-20 14:50:48,337 INFO  [RpcServer.responder] ipc.RpcServer:
> >> > RpcServer.responder: stopping
> >> > 2016-06-20 14:50:48,339 INFO  [master:master:60000] mortbay.log:
> Stopped
> >> > HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60010
> >> > 2016-06-20 14:50:48,348 INFO  [master:master:60000]
> zookeeper.ZooKeeper:
> >> > Session: 0x3556c8a93960004 closed
> >> > 2016-06-20 14:50:48,349 INFO  [master:master:60000] master.HMaster:
> >> HMaster
> >> > main thread exiting
> >> > 2016-06-20 14:50:48,349 ERROR [main] master.HMasterCommandLine: Master
> >> > exiting
> >> > java.lang.RuntimeException: HMaster Aborted
> >> > at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:201)
> >> > at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
> >> > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> >> > at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
> >> > at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3062)
> >> >
> >> >
> >> >
> >> > Can anybody tell me the root cause of this error? Do you think this
> >> > info is important:
> >> > *"2016-06-20 14:50:48,180 INFO  [master:master:60000] wal.FSHLog: FileSystem's output stream doesn't support getNumCurrentReplicas; --HDFS-826 not available; fsOut=alluxio.client.file.FileOutStream*
> >> > *2016-06-20 14:50:48,180 INFO  [master:master:60000] wal.FSHLog: FileSystem's output stream doesn't support getPipeline; not available; fsOut=alluxio.client.file.FileOutStream"*
> >> >
> >> > 2016-06-16 11:31 GMT+08:00 kevin <kiss.kevin...@gmail.com>:
> >> >
> >> > > I want to test if running on Alluxio could improve performance,
> >> > > because Alluxio is a distributed filesystem on top of memory, and its
> >> > > under-filesystem can be HDFS, S3, or something else.
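For reference, pointing HBase at Alluxio in a setup like this generally means setting hbase.rootdir to the Alluxio URI and registering the Alluxio Hadoop-compatible filesystem (plus putting the Alluxio client jar on HBase's classpath); the property values below follow Alluxio 1.x conventions and are illustrative, not a verified configuration:

```xml
<!-- hbase-site.xml: store HBase data in Alluxio instead of HDFS directly -->
<property>
  <name>hbase.rootdir</name>
  <value>alluxio://master:19998/hbase</value>
</property>

<!-- core-site.xml: map the alluxio:// scheme to Alluxio's FileSystem client -->
<property>
  <name>fs.alluxio.impl</name>
  <value>alluxio.hadoop.FileSystem</value>
</property>
```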
> >> > >
> >> > >
> >> > > 2016-06-16 10:32 GMT+08:00 Ted Yu <yuzhih...@gmail.com>:
> >> > >
> >> > >> Since you already have hadoop 2.7.1, why is alluxio 1.1.0 needed ?
> >> > >>
> >> > >> Can you illustrate your use case ?
> >> > >>
> >> > >> Thanks
> >> > >>
> >> > >> On Wed, Jun 15, 2016 at 7:27 PM, kevin <kiss.kevin...@gmail.com>
> >> wrote:
> >> > >>
> >> > >> > hi,all:
> >> > >> >
> >> > >> > I wonder whether running HBase on Alluxio/Tachyon is possible and a
> >> > >> > good idea; can anybody share their experience? Thanks.
> >> > >> > I will try hbase 0.98.16 with hadoop 2.7.1 on top of alluxio 1.1.0.
> >> > >> >
> >> > >>
> >> > >
> >> > >
> >> >
> >>
> >
> >
>
