Github user pwendell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/30#discussion_r11654985
  
    --- Diff: docs/python-programming-guide.md ---
    @@ -63,6 +63,11 @@ All of PySpark's library dependencies, including [Py4J](http://py4j.sourceforge.
     Standalone PySpark applications should be run using the `bin/pyspark` script, which automatically configures the Java and Python environment using the settings in `conf/spark-env.sh` or `.cmd`.
     The script automatically adds the `bin/pyspark` package to the `PYTHONPATH`.
     
    +# Running PySpark on YARN
    +
    +Running PySpark on a YARN-managed cluster requires a few extra steps. The client must reference a ZIP file containing PySpark and its dependencies. To create this file, run `make` inside the `python/` directory in the Spark source. This will generate `pyspark-assembly.zip` under `python/build/`. Then, set the `PYSPARK_ZIP` environment variable to point to the location of this file. Lastly, set `MASTER=yarn-client`.
    --- End diff ---
    
    If you make the proposed changes, this could be simplified to just saying that you can run it in `yarn-client` mode.
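
    For reference, the workflow the proposed doc text describes amounts to roughly the following shell session. This is only a sketch: the `pyspark-assembly.zip` name and the `PYSPARK_ZIP` variable come from the patch text and may go away if it is simplified as suggested, and `my_app.py` is a hypothetical application script.

        # Build the ZIP of PySpark and its dependencies; per the proposed
        # doc text, this lands in python/build/pyspark-assembly.zip.
        cd python
        make
        cd ..

        # Point the client at the ZIP and select yarn-client mode.
        export PYSPARK_ZIP=python/build/pyspark-assembly.zip
        export MASTER=yarn-client

        # Run a standalone PySpark application (my_app.py is hypothetical).
        ./bin/pyspark my_app.py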

