Hi Josh,
I am using CDH 5.5.2 with HBase 1.0.0, Phoenix 4.5.2, and Spark 1.6.0. I looked
up the error and found other reports of it, which led me to ask the question.
I'll try the Phoenix 4.7.0 client jar and see what happens.
The error I am getting is:
java.sql.SQLException: ERROR 103 (08004): Unable to
I want to know if there is an update/patch coming to Spark or the Spark plugin.
I see that the Spark plugin does not work because HBase classes are missing
from the Spark assembly jar. So, when Spark does reflection, it does not look
for HBase client classes in the Phoenix plugin jar but only in
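A possible workaround, assuming the root cause is classpath visibility: put the Phoenix client jar on both the driver and executor classpaths so Spark's reflection can resolve the HBase/Phoenix classes. The jar path and name below are examples only; adjust them to your installation:

```shell
spark-submit \
  --conf spark.driver.extraClassPath=/opt/phoenix/phoenix-4.7.0-client-spark.jar \
  --conf spark.executor.extraClassPath=/opt/phoenix/phoenix-4.7.0-client-spark.jar \
  your_job.py
```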
Are the primary keys in the .csv file all unique? (no rows overwriting
other rows)
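One quick way to check is a small script like the sketch below, assuming the primary key is the first CSV column and the file has a header row (the function name and file layout are illustrative, not part of psql.py):

```python
import csv
from collections import Counter

def duplicate_keys(path):
    """Return a map of duplicated key -> occurrence count for the CSV at path.

    With Phoenix bulk loads, rows sharing a primary key upsert over each
    other, so 10 million input rows can collapse to far fewer table rows.
    """
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row (assumption: file has one)
        counts = Counter(row[0] for row in reader)
    return {key: n for key, n in counts.items() if n > 1}
```

If the returned map is non-empty, the row count after loading will be smaller than the input row count.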
On Fri, Apr 8, 2016 at 10:21 AM, Amit Shah wrote:
> Hi,
>
> I am using phoenix 4.6 and hbase 1.0. After bulk loading 10 mil records
> into a table using the psql.py utility, I tried
Hi,
I am using phoenix 4.6 and hbase 1.0. After bulk loading 10 mil records
into a table using the psql.py utility, I tried querying the table using
the sqlline.py utility with a select count(*) query, but I see only 0.1
million records.
What could be missing?
The psql.py logs are
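For context on the load step, a typical psql.py CSV bulk load looks roughly like this; the ZooKeeper quorum, table name, and file path here are placeholders, not the actual values used:

```shell
psql.py -t MY_TABLE zk-host:2181 /path/to/data.csv
```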
Hi Divya,
That's strange. Are you able to post a snippet of your code to look at? And
are you sure that you're saving the dataframes as per the docs (
https://phoenix.apache.org/phoenix_spark.html)?
Depending on your HDP version, it may or may not actually have
phoenix-spark support.
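For comparison, the save path described in the phoenix-spark docs looks like the PySpark sketch below; the table name and zkUrl are placeholders, and `df` stands for an already-created DataFrame:

```python
# Assumes `df` is an existing DataFrame whose columns match a Phoenix
# table named OUTPUT_TABLE, and that zk-host:2181 is your ZooKeeper quorum.
df.write \
    .format("org.apache.phoenix.spark") \
    .mode("overwrite") \
    .option("table", "OUTPUT_TABLE") \
    .option("zkUrl", "zk-host:2181") \
    .save()
```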