[jira] [Commented] (PHOENIX-2130) Can't connct to hbase cluster

2016-02-08 Thread Sourabh Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15138440#comment-15138440
 ] 

Sourabh Jain commented on PHOENIX-2130:
---

You can set this property (hbase.table.sanity.checks, named in the error message) in hbase-site.xml and restart your region servers.
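For reference, a minimal sketch of what that would look like in hbase-site.xml. It is an assumption that hbase.table.sanity.checks is the property meant here (it is the one the error message names); note that "MetaDataRegionObserver cannot be loaded" typically means the Phoenix server jar is missing from HBase's lib/ directory, and disabling the sanity check only bypasses the check rather than fixing that.

```xml
<!-- hbase-site.xml fragment: bypasses the table sanity check named in the
     error above. Assumption: this is the property the comment refers to.
     The usual root cause (Phoenix server jar not on the HBase master /
     region server classpath) should be fixed instead. -->
<property>
  <name>hbase.table.sanity.checks</name>
  <value>false</value>
</property>
```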

> Can't connct to hbase cluster
> -
>
> Key: PHOENIX-2130
> URL: https://issues.apache.org/jira/browse/PHOENIX-2130
> Project: Phoenix
>  Issue Type: Bug
> Environment: ubuntu 14.0
>Reporter: BerylLin
>
> I have a Hadoop cluster with 6 nodes; the Hadoop version is 2.2.0.
> A ZooKeeper cluster is installed on 
> datanode1, datanode2, datanode3, datanode4, and datanode5.
> An HBase cluster (version 0.98.13) is installed in the environment above.
> HBase starts and works successfully.
> The Phoenix version is 4.3.0 (4.4.0 has also been tried).
> When I run "sqlline.py datanode1:2181", I get the error below:
> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> issuing: !connect jdbc:phoenix:datanode1:2181 none none 
> org.apache.phoenix.jdbc.PhoenixDriver
> Connecting to jdbc:phoenix:datanode1:2181
> SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
> SLF4J: Defaulting to no-operation (NOP) logger implementation
> SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
> details.
> 15/07/18 20:55:39 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Error: org.apache.hadoop.hbase.DoNotRetryIOException: Class 
> org.apache.phoenix.coprocessor.MetaDataRegionObserver cannot be loaded Set 
> hbase.table.sanity.checks to false at conf or table descriptor if you want to 
> bypass sanity checks
>   at 
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1978)
>   at 
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1910)
>   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1849)
>   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:2025)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:42280)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2107)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>   at 
> org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Class 
> org.apache.phoenix.coprocessor.MetaDataRegionObserver cannot be loaded Set 
> hbase.table.sanity.checks to false at conf or table descriptor if you want to 
> bypass sanity checks
>   at 
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1978)
>   at 
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1910)
>   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1849)
>   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:2025)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:42280)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2107)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>   at 
> org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:870)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1194)
>   at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(DelegateConnectionQueryServices.java:111)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1682)
>   at 
> 

[jira] [Commented] (PHOENIX-2130) Can't connct to hbase cluster

2016-01-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15113955#comment-15113955
 ] 

James Taylor commented on PHOENIX-2130:
---

Is this still an issue, [~Beryl]? If so, we'll need more info and a clear way 
to repro.


[jira] [Commented] (PHOENIX-2130) Can't connct to hbase cluster

2015-10-02 Thread Julian Rozentur (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14941705#comment-14941705
 ] 

Julian Rozentur commented on PHOENIX-2130:
--

You don't need a 6-node cluster to reproduce this; mine is pseudo-distributed 
HBase 1.1.2. I get a DoNotRetryIOException when trying to create a JDBC 
connection, and the database has no user tables.
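The failing step can be sketched as follows. This is a minimal illustration, not the reporter's code: the host and port are assumptions matching the setups described above, and PhoenixConnect is a hypothetical helper name.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class PhoenixConnect {
    /** Build a Phoenix JDBC URL from a ZooKeeper quorum host and client port. */
    static String phoenixUrl(String zkHost, int zkPort) {
        return "jdbc:phoenix:" + zkHost + ":" + zkPort;
    }

    /**
     * Open a Phoenix connection. The first connection against a cluster makes
     * Phoenix create its SYSTEM tables; that CREATE TABLE is the call that
     * fails with DoNotRetryIOException in the stack traces above when the
     * Phoenix server jar is not on the HBase master/region server classpath.
     */
    static Connection connect(String zkHost, int zkPort) throws SQLException {
        return DriverManager.getConnection(phoenixUrl(zkHost, zkPort));
    }
}
```

For example, `connect("datanode1", 2181)` corresponds to the `sqlline.py datanode1:2181` invocation in the report, and `connect("localhost", 2181)` to a pseudo-distributed setup.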
