Re: spark plugin with java

2015-12-01 Thread Josh Mahonin
Hi Krishna, I've not tried it in Java at all, but as of Spark 1.4+ the DataFrame API should be unified between Scala and Java, so the following may work for you: DataFrame df = sqlContext.read() .format("org.apache.phoenix.spark") .option("table", "TABLE1") .option("zkUrl", "")
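[A minimal sketch expanding the snippet above, not from the original thread: it assumes Spark 1.4/1.5 with the phoenix-spark jar on the classpath and a Phoenix table TABLE1 with columns ID and COL1. The class name and the "localhost:2181" ZooKeeper quorum are placeholders, not values from the thread.]

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class PhoenixSparkJavaExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("phoenix-spark-java");
        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(sc.sc());

        // Read the Phoenix table through the DataSource API (Java and Scala share it in 1.4+)
        DataFrame df = sqlContext.read()
            .format("org.apache.phoenix.spark")
            .option("table", "TABLE1")
            .option("zkUrl", "localhost:2181")  // placeholder ZooKeeper quorum
            .load();

        // Equivalent of selecting ID and COL1 as in the Scala phoenixTableAsDataFrame example
        df.select("ID", "COL1").show();
    }
}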

spark plugin with java

2015-12-01 Thread Krishna
Hi, Is there a working example of using the Spark plugin from Java? Specifically, what's the Java equivalent of creating a DataFrame as shown here in Scala: val df = sqlContext.phoenixTableAsDataFrame("TABLE1", Array("ID", "COL1"), conf = configuration)

Phoenix(+Query Server) with Namenode HA

2015-12-01 Thread YoungWoo Kim
Hi, I'm configuring HBase and Phoenix on HDFS HA. Do I need any specific configuration or environment variables in Phoenix for Namenode HA? My development cluster works fine without Namenode HA, but on the new cluster with NN HA I can't connect to Phoenix using sqlline or sqlline-thin. It looks like HBase and
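[A configuration sketch, not from this thread: with Namenode HA, HBase (and the Phoenix Query Server) generally need the HA-enabled hdfs-site.xml/core-site.xml on their classpath, and hbase.rootdir should reference the HDFS nameservice logical name instead of a single Namenode host. The nameservice name "mycluster" below is a placeholder.]

<!-- hbase-site.xml: point the root dir at the nameservice, not a NN host:port -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://mycluster/hbase</value>
</property>

<!-- hdfs-site.xml visible to HBase/PQS must define the nameservice, e.g. -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>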

Re: Problem with arrays in phoenix-spark

2015-12-01 Thread Dawid Wysakowicz
Sure, I have done that. https://issues.apache.org/jira/browse/PHOENIX-2469 2015-11-30 22:22 GMT+01:00 Josh Mahonin : > Hi David, > > Thanks for the bug report and the proposed patch. Please file a JIRA and > we'll take the discussion there. > > Josh > > On Mon, Nov 30, 2015 at 1:01 PM, Dawid Wys