Hi there,
We're currently using HDP 2.3.4 and Spark 1.5.2, with a Spark Streaming job in YARN
cluster mode consuming from a high-volume Kafka topic. When we try to access
the Spark Streaming UI on the application master, it is unresponsive, hangs, or
sometimes comes back with "connection refused".
It
Hi,
I was wondering if it is possible to pass a Java system property to the JVM
that performs the submission of a yarn-cluster application, for instance
-Dlog4j.configuration. I believe it defaults to using the log4j.properties in
SPARK_CONF_DIR; is it possible to override this, as I do not
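One commonly suggested way to do this (a sketch, not verified against HDP 2.3.4 specifically; the file name my-log4j.properties and the application details are placeholders) is to ship a custom log4j file with --files and point the driver and executor JVMs at it via the extraJavaOptions settings:

```shell
# Ship a custom log4j config to the YARN containers and tell both the
# driver and executor JVMs to use it instead of the one in SPARK_CONF_DIR.
# In yarn-cluster mode the driver runs inside the application master, so
# spark.driver.extraJavaOptions (not --driver-java-options) applies there.
spark-submit \
  --master yarn-cluster \
  --files /local/path/my-log4j.properties \
  --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=my-log4j.properties" \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=my-log4j.properties" \
  --class com.example.MyApp \
  my-app.jar
```

The bare file name works in the -D flag because --files places the file in each container's working directory, which is on the classpath.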
Yeah, we ran into this issue. The key part is to have the HBase jars and
the hbase-site.xml config on the classpath of the Spark submitter.
We did it slightly differently from Y Bodnar: we set the required jars
and config via the SPARK_DIST_CLASSPATH env var in our spark-env file (rather
than
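A minimal sketch of the spark-env approach described above (assuming the HBase client is installed and `hbase classpath` is available; the config path is a typical HDP location and may differ on your cluster):

```shell
# conf/spark-env.sh
# Prepend the HBase jars and the directory containing hbase-site.xml to
# the classpath Spark distributes to the driver and executors.
export SPARK_DIST_CLASSPATH="/etc/hbase/conf:$(hbase classpath):${SPARK_DIST_CLASSPATH}"
```

Putting the config directory first ensures hbase-site.xml is found ahead of any stale copy bundled inside a jar.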