reference link: http://phoenix.apache.org/installation.html
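The installation page boils down to copying the Phoenix server jar to every HBase node and restarting. A minimal dry-run sketch of that deploy, which only prints the commands it would run (the paths are the ones used in this thread; the host names are hypothetical placeholders, substitute your own):

```shell
#!/bin/sh
# Dry-run sketch: prints the deploy commands instead of executing them.
# PHOENIX_HOME and HBASE_HOME match the paths mentioned in this thread;
# HOSTS is a hypothetical list of HBase nodes.
PHOENIX_HOME=/opt/apache-phoenix-4.14.0-HBase-1.2-bin
HBASE_HOME=/opt/hbase-1.2.6
HOSTS="hmaster01 regionserver01 regionserver02"

for host in $HOSTS; do
  # The server jar must sit in HBase's lib dir on every HMaster and HRegionServer.
  echo "scp $PHOENIX_HOME/phoenix-4.14.0-HBase-1.2-server.jar $host:$HBASE_HOME/lib/"
done

# Restart HBase afterwards so every node picks up the jar.
echo "$HBASE_HOME/bin/stop-hbase.sh && $HBASE_HOME/bin/start-hbase.sh"
```

Drop the echo wrappers to run it for real; sqlline.py then connects from the client side using the phoenix client jar alone.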


----------------------------------------
   Yun Zhang
   Best regards!


2018-08-07 9:30 GMT+08:00 倪项菲 <nixiangfei_...@chinamobile.com>:

> Hi Zhang Yun,
>     How do I deploy the Phoenix server? I only have the information from
> the Phoenix website, and it doesn't mention a Phoenix server.
>
>
>
>
> From: Jaanai Zhang <cloud.pos...@gmail.com>
> Date: 2018/08/07 (Tuesday) 09:16
> To: user <user@phoenix.apache.org>
> Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase
> 1.2.6
>
> Please ensure the Phoenix server was deployed and HBase has been restarted.
>
>
> ----------------------------------------
>    Yun Zhang
>    Best regards!
>
>
> 2018-08-07 9:10 GMT+08:00 倪项菲 <nixiangfei_...@chinamobile.com>:
>
>>
>> Hi Experts,
>>     I am using HBase 1.2.6. The cluster works well with HMaster HA, but
>> integrating Phoenix with HBase fails. These are the steps I followed:
>>     1. Download apache-phoenix-4.14.0-HBase-1.2-bin from
>> http://phoenix.apache.org, then copy the tarball to the HMaster and
>> unpack it.
>>     2. Copy phoenix-core-4.14.0-HBase-1.2.jar and
>> phoenix-4.14.0-HBase-1.2-server.jar to all HBase nodes, HMaster and
>> HRegionServers alike, into the HBase lib directory; my path is
>> /opt/hbase-1.2.6/lib.
>>     3. Restart the HBase cluster.
>>     4. Start using Phoenix, which returns the error below:
>>       [apache@plat-ecloud01-bigdata-journalnode01 bin]$ ./sqlline.py plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
>> Setting property: [incremental, false]
>> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
>> issuing: !connect jdbc:phoenix:plat-ecloud01-bigdata-zk01 none none org.apache.phoenix.jdbc.PhoenixDriver
>> Connecting to jdbc:phoenix:plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in [jar:file:/opt/apache-phoenix-4.14.0-HBase-1.2-bin/phoenix-4.14.0-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>> 18/08/06 18:40:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>> Error: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
>>         at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
>>         at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
>>         at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
>>         at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)
>>         at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
>>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
>>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>>         at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>>         at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>>         at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
>> org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
>>         at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
>>         at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
>>         at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
>>         at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)
>>         at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
>>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
>>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>>         at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>>         at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>>         at java.lang.Thread.run(Thread.java:745)
>>
>>         at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:144)
>>         at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1197)
>>         at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1491)
>>         at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2717)
>>         at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1114)
>>         at org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)
>>         at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
>>         at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>>         at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>>         at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:389)
>>         at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>>         at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1806)
>>         at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2528)
>>         at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2491)
>>         at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>>         at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2491)
>>         at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>>         at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
>>         at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>>         at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>>         at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>>         at sqlline.Commands.connect(Commands.java:1064)
>>         at sqlline.Commands.connect(Commands.java:996)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:498)
>>         at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>>         at sqlline.SqlLine.dispatch(SqlLine.java:809)
>>         at sqlline.SqlLine.initArgs(SqlLine.java:588)
>>         at sqlline.SqlLine.begin(SqlLine.java:661)
>>         at sqlline.SqlLine.start(SqlLine.java:398)
>>         at sqlline.SqlLine.main(SqlLine.java:291)
>> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
>>
>>       I searched the internet but found nothing helpful.
>>       Any help would be highly appreciated!
>>
>>
>
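
A note on the error text itself: it suggests disabling HBase's table sanity check. For reference, that bypass would be a fragment like the one below in hbase-site.xml, but it is a workaround only and hides the symptom; the actual cause is usually that MetaDataSplitPolicy cannot be loaded because the Phoenix server jar is missing or mismatched on the HBase classpath, which is fixed by deploying the jar and restarting.

```xml
<!-- hbase-site.xml: bypass the table-descriptor sanity check (workaround only;
     the real fix is putting the matching phoenix server jar in HBase's lib
     directory on every node and restarting). -->
<property>
  <name>hbase.table.sanity.checks</name>
  <value>false</value>
</property>
```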
