It might work using the `yarn-client` mode (
https://spark.incubator.apache.org/docs/latest/running-on-yarn.html#launch-spark-application-with-yarn-client-mode),
but I haven't tried it yet. PySpark depends on Py4J, but only in the
driver JVM, so I don't think anything needs to go in
SPARK_YARN_APP_JAR, since all of the dependencies should already be
included in SPARK_JAR.
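
For what it's worth, here is a rough, untested sketch of what launching the PySpark shell in `yarn-client` mode might look like, based on the environment variables mentioned above. The assembly-jar path is just an illustration; the exact filename depends on your Spark and Scala versions.

```shell
# Hypothetical, untested sketch: launch the PySpark shell against YARN
# in yarn-client mode. SPARK_JAR points at the Spark assembly jar,
# which should already bundle the Py4J dependency the driver needs,
# so no SPARK_YARN_APP_JAR is set here.

# Path is illustrative; adjust for your build/Scala version.
export SPARK_JAR=$SPARK_HOME/assembly/target/scala-2.10/spark-assembly.jar

# Point the shell at YARN in client mode (YARN config must be on the
# classpath, e.g. via HADOOP_CONF_DIR).
export HADOOP_CONF_DIR=/etc/hadoop/conf
export MASTER=yarn-client

$SPARK_HOME/bin/pyspark
```

Again, this is only a guess at the configuration; whether it actually works is exactly what SPARK-1004 is meant to sort out.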

I've opened a JIRA ticket to track progress on Yarn support for PySpark,
whether it ends up simply requiring better documentation or new code:
https://spark-project.atlassian.net/browse/SPARK-1004


On Tue, Dec 17, 2013 at 1:13 AM, Xicheng Dong <[email protected]> wrote:

> hi, all
>    Can a Spark Python program run on Hadoop YARN? I read
> running-on-yarn.html<http://spark.incubator.apache.org/docs/latest/running-on-yarn.html>
>  and 
> python-programming-guide.html<http://spark.incubator.apache.org/docs/latest/python-programming-guide.html>,
> but did not find any useful information.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Spark Users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> For more options, visit https://groups.google.com/groups/opt_out.
>
