[
https://issues.apache.org/jira/browse/OOZIE-3404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16731948#comment-16731948
]
Hadoop QA commented on OOZIE-3404:
----------------------------------
PreCommit-OOZIE-Build started
> The env variable of SPARK_HOME needs to be set when running pySpark
> -------------------------------------------------------------------
>
> Key: OOZIE-3404
> URL: https://issues.apache.org/jira/browse/OOZIE-3404
> Project: Oozie
> Issue Type: Bug
> Affects Versions: 5.1.0
> Reporter: Junfan Zhang
> Assignee: Junfan Zhang
> Priority: Major
> Attachments: oozie-3404-1.patch
>
>
> When we run Spark on a cluster, we rely on the Spark jars on HDFS; we do not
> deploy Spark on the cluster nodes. As a result, running pySpark as described in
> the Oozie documentation fails.
>
> I found that on Hadoop 2.0+, although Oozie sets the {{SPARK_HOME}} variable in
> {{mapred.child.env}}, Hadoop reads {{mapreduce.map.env}} first ([source
> code|https://github.com/apache/hadoop/blob/f95b390df2ca7d599f0ad82cf6e8d980469e7abb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/MapReduceChildJVM.java#L45]).
> So unless {{SPARK_HOME}} is also set in {{mapreduce.map.env}}, pySpark does not
> work.
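> A minimal workflow-action sketch of the kind of workaround this implies (the
> schema version, property placement, and dummy {{SPARK_HOME}} path below are
> illustrative assumptions, not the attached patch):
> {code:xml}
> <!-- Sketch only: explicitly set SPARK_HOME in mapreduce.map.env, the property
>      Hadoop 2.0+ actually reads, in addition to mapred.child.env. -->
> <spark xmlns="uri:oozie:spark-action:1.0">
>     <resource-manager>${resourceManager}</resource-manager>
>     <name-node>${nameNode}</name-node>
>     <configuration>
>         <property>
>             <name>mapreduce.map.env</name>
>             <!-- The directory does not need to exist; the variable only needs
>                  to be present in the environment for pySpark to start. -->
>             <value>SPARK_HOME=/not/existing/dir</value>
>         </property>
>     </configuration>
>     <master>yarn</master>
>     <mode>cluster</mode>
>     <name>pyspark-pi</name>
>     <jar>pi.py</jar>
> </spark>
> {code}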
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)