That's because Spark has to read HADOOP_CONF_DIR and compress the jars under
SPARK_HOME. If we can handle that ourselves, we don't have to add it into
Spark.
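For context, this is roughly what the SPARK_HOME-based flow looks like today. A sketch only; the paths and the example class/jar are illustrative assumptions, not from this thread:

```
# Spark reads the YARN/HDFS client configs from HADOOP_CONF_DIR.
export HADOOP_CONF_DIR=/etc/hadoop/conf    # assumed path
export SPARK_HOME=/opt/spark               # assumed path

# With neither spark.yarn.jars nor spark.yarn.archive set, spark-submit
# zips everything under $SPARK_HOME/jars and uploads the archive to the
# YARN distributed cache on every submission.
"$SPARK_HOME"/bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  "$SPARK_HOME"/examples/jars/spark-examples_2.11-2.1.0.jar
```

Handling the config distribution and jar upload ourselves would remove both of those dependencies on a local SPARK_HOME.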
On Wed, May 10, 2017 at 5:19 AM, Jeff Zhang wrote:
>>> I have implemented running interpreters in yarn cluster and succeeded in
>>> launching SparkInterpreter with local mode in yarn cluster.
Do you mean you can run the Spark interpreter in yarn-cluster mode with
SPARK_HOME set?
IIUC, running the Spark interpreter in yarn-client or yarn-cluster mode
requires SPARK_HOME.
I have implemented running interpreters in yarn cluster and succeeded in
launching SparkInterpreter with local mode in yarn cluster. BTW, I've tested
it with yarn-client, but it needs SPARK_HOME to be set.
I'm not sure if it was possible before, but I want to do it with embedded
spark.
Thanks for the reply.
yarn-client mode doesn't work for embedded Spark. But did it work before?
I think embedded Spark should only work with local mode.
On Tue, May 9, 2017 at 10:02 AM, Jongyoul Lee wrote:
Hi devs,
I need your help to verify a mode of SparkInterpreter. I saw the message
below on the Spark website about yarn mode:
```
To make Spark runtime jars accessible from YARN side, you can specify
spark.yarn.archive or spark.yarn.jars. For details please refer to Spark
Properties.
```
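As a hedged illustration of that passage (the HDFS paths here are assumptions for the example, not from the Spark docs), the idea is to publish Spark's runtime jars once and point YARN at them, instead of re-uploading a freshly zipped SPARK_HOME/jars on every submit:

```
# spark-defaults.conf sketch: YARN containers fetch the jars from HDFS.
spark.yarn.jars  hdfs:///apps/spark/jars/*.jar

# Alternatively, package the jars into a single archive and use that instead:
#   jar cv0f spark-libs.jar -C $SPARK_HOME/jars/ .
#   hdfs dfs -put spark-libs.jar /apps/spark/
# spark.yarn.archive  hdfs:///apps/spark/spark-libs.jar
```

Only one of the two properties needs to be set; spark.yarn.archive takes a single archive, spark.yarn.jars a list/glob of jars.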