Hi all,
I am getting an exception when trying to execute a Spark job that uses
the new Phoenix 4.5 Spark connector. The application works well on my
local machine, but fails to run in a cluster environment on top of YARN.
The cluster is Cloudera CDH 5.4.4 with HBase 1.0.0 and Phoenix
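For context, this is roughly how the job reads a Phoenix table through the 4.5 Spark plugin. A minimal sketch only; the table name, columns, and ZooKeeper quorum are placeholders, not the actual job's values:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.phoenix.spark._  // adds phoenixTableAsDataFrame to SQLContext

val conf = new SparkConf().setAppName("phoenix-spark-example")
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)

// Placeholder table, columns, and ZooKeeper URL.
val df = sqlContext.phoenixTableAsDataFrame(
  "MY_TABLE",
  Array("ID", "COL1"),
  zkUrl = Some("zk-host:2181"))

df.show()
```

Locally this works because the Phoenix client jars are on the driver classpath; on YARN the same jars must also reach the executors (e.g. via `--jars` or the cluster's HBase/Phoenix classpath), which is a common source of such failures.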
Hi,
I've read about the recent updates to the Spark Streaming integration with
Kafka (I'm referring to the new direct approach without receivers).
In the new approach, metadata are persisted in checkpoint folders on HDFS
so that the StreamingContext can be recreated in case of failure.
This means that
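The recovery pattern described above can be sketched as follows. This is only an illustration against the Spark 1.x streaming API; the checkpoint directory, broker address, and topic name are hypothetical:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils
import kafka.serializer.StringDecoder

// Hypothetical HDFS checkpoint location.
val checkpointDir = "hdfs:///user/spark/checkpoints/my-app"

def createContext(): StreamingContext = {
  val conf = new SparkConf().setAppName("direct-kafka-example")
  val ssc = new StreamingContext(conf, Seconds(10))
  ssc.checkpoint(checkpointDir)

  // Hypothetical broker list and topic.
  val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")
  val topics = Set("events")

  // Direct (receiver-less) stream: offsets are tracked by Spark itself
  // and saved as part of the checkpoint, not committed via ZooKeeper.
  val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
    ssc, kafkaParams, topics)

  stream.map(_._2).count().print()
  ssc
}

// On restart, the context (DStream graph, configuration, and offsets)
// is rebuilt from the checkpoint if one exists; otherwise it is created fresh.
val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
ssc.start()
ssc.awaitTermination()
```

The key point is that `StreamingContext.getOrCreate` decides between recovery and fresh start, so all stream setup must live inside the creation function for recovery to work.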