Github user sun-rui commented on the pull request: https://github.com/apache/spark/pull/6743#issuecomment-116710169

This adds support for shipping the SparkR package to R workers, which is required by the RDD APIs. Tested createDataFrame() by creating a DataFrame from an R list.

The sparkRLibDir parameter of sparkR.init() has been removed. Instead, the SparkR package location is determined on each worker node according to the deployment mode (this allows a node-specific SPARK_HOME). I'm not sure whether there is a better solution; a rough scan of the PySpark code does not tell me how PySpark locates pyspark.zip in the various deployment modes. @davies, could you give me a hint and review this patch?

Next, I'd like to refactor this code to align with SPARK-5479 (which moves YARN-specific code from SparkSubmit to deploy/yarn). @shivaram, do you think I should refactor the code in this patch, or do it in a new JIRA?
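For reference, a minimal sketch of the kind of test described above, assuming the SparkR API circa Spark 1.4 (sparkRSQL.init() plus createDataFrame()); the master URL, column names, and schema argument here are illustrative, not taken from the patch:

```r
# Sketch of testing createDataFrame() from an R list, assuming the
# Spark 1.4-era SparkR API. Values below are illustrative only.
library(SparkR)

sc <- sparkR.init(master = "local[2]")  # note: no sparkRLibDir argument anymore
sqlContext <- sparkRSQL.init(sc)

# Create a DataFrame from a plain R list; this goes through the RDD path,
# so the SparkR package must be available on every worker node.
rows <- list(list(1L, "a"), list(2L, "b"))
df <- createDataFrame(sqlContext, rows, schema = c("id", "value"))
head(df)

sparkR.stop()
```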