spark 2.0.2 connect phoenix query server error

2016-11-21 Thread Dequn Zhang
Hello, since Spark 2.x cannot use the Phoenix Spark Interpreter to load data, I want to use JDBC. But when I try to get a *thin connection* I get the following error, while a *direct connection* works fine. I ran this in spark-shell with Scala 2.11.8, so can anyone give a solution? Phoenix: 4.8.1
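For reference, a minimal sketch of opening a thin connection to the Phoenix Query Server over plain JDBC (the host, port, and probe query are placeholders; the driver class name and URL format come from the phoenix-queryserver-client package). The `serialization` property must match what the Query Server is configured with — a mismatch here can be one source of thin-client errors:

```scala
import java.sql.DriverManager

// Build the thin-client JDBC URL; host/port are placeholders for your PQS.
def thinUrl(host: String, port: Int): String =
  s"jdbc:phoenix:thin:url=http://$host:$port;serialization=PROTOBUF"

// Open a thin connection and run a probe query
// (requires phoenix-queryserver-client on the classpath).
def probeThin(host: String, port: Int): Unit = {
  Class.forName("org.apache.phoenix.queryserver.client.Driver")
  val conn = DriverManager.getConnection(thinUrl(host, port))
  try {
    val rs = conn.createStatement().executeQuery("SELECT 1")
    while (rs.next()) println(rs.getInt(1))
  } finally conn.close()
}
```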

Re: spark 2.0.2 connect phoenix query server error

2016-11-23 Thread Dequn Zhang
> …file to see if there is more there.
>
> Dequn Zhang wrote:
>> Hello, since Spark 2.x cannot use the Phoenix Spark Interpreter to load data, I want to use JDBC, but when I try to get a *thin connection* I get the following error, while a *direct connec

Re: Apache Spark Plugin doesn't support Spark 2.0

2016-12-04 Thread Dequn Zhang
Spark changed the DataFrame definition in 2.x; it is not compatible with 1.x, and the Phoenix Spark Interpreter was developed against the Spark 1.x API. So, for now, you can use Phoenix JDBC instead in Spark 2.0. You can look at this JIRA: https://issues.apache.org/jira/browse/PHOENIX- On 5 December 2016
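A hedged sketch of the plain-JDBC route the reply suggests, usable from spark-shell (the ZooKeeper quorum and table name are placeholders; the fat-client driver class `org.apache.phoenix.jdbc.PhoenixDriver` ships in phoenix-client):

```scala
import java.sql.DriverManager

// Direct (fat-client) JDBC URL: jdbc:phoenix:<zookeeper quorum>.
def directUrl(zkQuorum: String): String = s"jdbc:phoenix:$zkQuorum"

// Query a table through the Phoenix fat client
// (MY_TABLE is a placeholder; requires phoenix-client on the classpath).
def readSample(zkQuorum: String): Unit = {
  Class.forName("org.apache.phoenix.jdbc.PhoenixDriver")
  val conn = DriverManager.getConnection(directUrl(zkQuorum))
  try {
    val rs = conn.createStatement().executeQuery("SELECT * FROM MY_TABLE LIMIT 10")
    while (rs.next()) println(rs.getString(1))
  } finally conn.close()
}
```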

Re: phoenix load time

2016-12-06 Thread Dequn Zhang
The *warning* has nothing to do with your load time; it just means the Hadoop native libraries were built in a 32-bit environment, while your environment is probably 64-bit. The time consumed depends on your network speed, on whether your cluster has enough computing resources, and finally on *whether an index exists on the field you are querying*. You ca
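To illustrate the last point, a small sketch of adding a Phoenix secondary index on the queried field (all index, table, and column names here are hypothetical):

```scala
// Build a Phoenix CREATE INDEX statement; all identifiers are hypothetical.
def createIndexDdl(index: String, table: String, column: String): String =
  s"CREATE INDEX $index ON $table ($column)"

// Example: an index on EVENTS.TS to speed up queries filtering on TS.
val indexDdl = createIndexDdl("IDX_EVENTS_TS", "EVENTS", "TS")
// Run it over any Phoenix JDBC connection:
//   conn.createStatement().execute(indexDdl)
```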

Re: FW: Failing on writing Dataframe to Phoenix

2017-02-15 Thread Dequn Zhang
Hello, I used Phoenix-4.9 on HBase-1.2; the pom dependency looks like

    <dependency>
      <groupId>org.apache.phoenix</groupId>
      <artifactId>phoenix-client</artifactId>
      <version>${phoenix.version}</version>
    </dependency>

The save() method is inaccessible in Phoenix-4.9, so use the following code instead, where "BIGJOY.TARJS" is my destination table: imoDF.sparkSession.sparkContext.had
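The quoted code is cut off in the archive. As an alternative write path that needs only the JDBC driver, here is a hedged sketch of upserting rows into the same destination table with a prepared statement (the ID/NAME columns are hypothetical; the explicit commit is needed because Phoenix connections default to autoCommit=false):

```scala
import java.sql.DriverManager

// Build a Phoenix UPSERT statement for the given columns.
def upsertSql(table: String, cols: Seq[String]): String =
  s"UPSERT INTO $table (${cols.mkString(", ")}) VALUES (${cols.map(_ => "?").mkString(", ")})"

// Write rows to the destination table over the fat client
// (ID/NAME are hypothetical columns).
def writeRows(zkQuorum: String, rows: Seq[(Long, String)]): Unit = {
  val conn = DriverManager.getConnection(s"jdbc:phoenix:$zkQuorum")
  try {
    val ps = conn.prepareStatement(upsertSql("BIGJOY.TARJS", Seq("ID", "NAME")))
    for ((id, name) <- rows) {
      ps.setLong(1, id)
      ps.setString(2, name)
      ps.executeUpdate()
    }
    conn.commit() // Phoenix connections are not auto-commit by default
  } finally conn.close()
}
```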

Re: Still having issues

2017-02-16 Thread Dequn Zhang
Please check whether your table was created by Phoenix (meaning it is not a *mapping* of an existing HBase table). You can follow the sample on the Phoenix official site; just *change the version to the latest*, use *phoenix-client* instead, and make sure the *schemas correspond*. Create a new table to test, use simple d
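A sketch of the "create a new table to test" advice: DDL for a simple Phoenix-managed (not mapped) table whose columns would correspond one-to-one with the DataFrame being saved (the table and column names are hypothetical):

```scala
// DDL for a Phoenix-managed test table; all names are hypothetical.
val testTableDdl: String =
  """CREATE TABLE IF NOT EXISTS TEST_SAVE (
    |  ID BIGINT NOT NULL PRIMARY KEY,
    |  NAME VARCHAR,
    |  SCORE DOUBLE
    |)""".stripMargin

// Run it over any Phoenix JDBC connection:
//   conn.createStatement().execute(testTableDdl)
```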

Re:

2017-02-16 Thread Dequn Zhang
Sorry for my tone, I didn't mean that. I use Phoenix 4.9 on HBase 1.2, and based on your error info I looked up the HTableDescriptor definition and found this method:

    public HTableDescriptor setValue(String key, String value) {
      if (value == null) {
        this.remove(key);
      } e