Hi,
Does phoenix-spark's saveToPhoenix use the JDBC driver internally, or
does it do something similar to CSVBulkLoader using HFiles?
Thanks!
Josh,
For my tests, I’m passing the Zookeeper Quorum URL.
"zkUrl" ->
"prod-nj3-hbase-master001.pnj3i.gradientx.com,prod-nj3-namenode001.pnj3i.gradientx.com,prod-nj3-namenode002.pnj3i.gradientx.com:2181”
Is this correct?
Thanks,
Ben
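For reference, a minimal sketch of passing a quorum string like that as 'zkUrl' through the phoenix-spark data source. This is an assumption-laden example: the DataFrame `df` and the table name `MY_TABLE` are hypothetical, and the quorum string is copied from the mail above.

```scala
import org.apache.spark.sql.{DataFrame, SaveMode}

// Hypothetical: assumes `df` is an existing DataFrame and MY_TABLE an
// existing Phoenix table; the quorum string is the one quoted above.
def saveExample(df: DataFrame): Unit = {
  val zkUrl = "prod-nj3-hbase-master001.pnj3i.gradientx.com," +
    "prod-nj3-namenode001.pnj3i.gradientx.com," +
    "prod-nj3-namenode002.pnj3i.gradientx.com:2181"

  df.write
    .format("org.apache.phoenix.spark")
    .mode(SaveMode.Overwrite)
    .options(Map("table" -> "MY_TABLE", "zkUrl" -> zkUrl))
    .save()
}
```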
> On Apr 9, 2016, at 8:06 AM, Josh Mahonin
Hi Josh,
Thank you very much for your help.
I could see there is phoenix-spark-4.4.0.2.3.4.0-3485.jar in my
phoenix/lib.
Can you confirm that this is the jar you are talking about?
Thanks,
Divya
Josh Mahonin wrote:
> Hi Divya,
Hi Ben,
It looks like a connection URL issue. Are you passing the correct 'zkUrl'
parameter, or do you have the HBase Zookeeper quorum defined in an
hbase-site.xml available in the classpath?
If you're able to connect to Phoenix using JDBC, you should be able to take
the JDBC url, pop off the
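To illustrate the URL relationship: a Phoenix JDBC URL has the form `jdbc:phoenix:<quorum>:<port>[:<znode>]`, so stripping the prefix leaves a value usable as 'zkUrl'. The hostnames below are made-up examples:

```scala
// Example only: hostnames are hypothetical.
val jdbcUrl = "jdbc:phoenix:zk1.example.com,zk2.example.com:2181"

// Removing the "jdbc:phoenix:" prefix yields the Zookeeper quorum string.
val zkUrl = jdbcUrl.stripPrefix("jdbc:phoenix:")

println(zkUrl) // prints zk1.example.com,zk2.example.com:2181
```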
Hi Divya,
You don't have the phoenix client-spark JAR in your classpath, which is
required for the phoenix-spark integration to work (as per the
documentation).
Also, you aren't using the vanilla Apache project that this mailing list
supports, but a vendor-packaged platform
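One quick way to check whether the client-spark JAR is actually on the driver's classpath is to try loading the data source class from a Spark shell. This is a sketch; it assumes the integration's entry point is `org.apache.phoenix.spark.DefaultSource`:

```scala
// If the phoenix client-spark JAR is on the classpath, loading the
// data source class succeeds; otherwise ClassNotFoundException is thrown.
val ok = try {
  Class.forName("org.apache.phoenix.spark.DefaultSource")
  true
} catch {
  case _: ClassNotFoundException => false
}
println(ok) // true only when the JAR is present
```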
Hi,
The code I am using to connect to Phoenix for writing:
import org.apache.spark.sql.{DataFrame, SaveMode}

def writeToTable(df: DataFrame, dbtable: String) = {
  val phx_properties = collection.immutable.Map[String, String](
    "zkUrl" -> "localhost:2181:/hbase-unsecure",
    "table" -> dbtable)
  df.write.format("org.apache.phoenix.spark")
    .mode(SaveMode.Overwrite).options(phx_properties).save()
}
Reposting for the benefit of other users.
-- Forwarded message --
From: Divya Gehlot
Date: 8 April 2016 at 19:54
Subject: Re: [HELP:]Save Spark Dataframe in Phoenix Table
To: Josh Mahonin
Hi Josh,
I am doing it in the same manner as mentioned