Github user vanzin commented on the issue:

https://github.com/apache/spark/pull/15159

I wonder if there isn't a better way to handle this without having to add more configs, e.g. just distribute things if the user asks for it. For example, if `--py-files` is provided, distribute those files and set env variables as if the application were a pyspark app, even if it isn't.

In YARN's `Client.scala`, distribute the python libs with the app when creating the jars archive - it's a tiny extra overhead given the size of the other jars. And pretty much deprecate `PYSPARK_ARCHIVES_PATH` - users can add those archives to `spark.yarn.jars` now and achieve the same effect.

Similar things apply for R, although I'm not really familiar with that path.

I might be overlooking something, but I think it would be nice to avoid adding more config options if possible.
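To make the first suggestion concrete, a submission under the proposed behavior might look like the sketch below. The application jar, main class, and HDFS paths are made up for illustration; the flags themselves (`--py-files`, `--conf spark.yarn.jars`) already exist today, the change would be that `--py-files` triggers the pyspark distribution/env setup even for a non-pyspark app:

```
# Hypothetical invocation: a Scala/Java app that also ships python
# dependencies. Under the proposal, --py-files alone would cause the
# files to be distributed and PYTHONPATH-style env vars to be set,
# without any new config option.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  --py-files deps.zip \
  --conf spark.yarn.jars="hdfs:///spark/jars/*" \
  my-app.jar
```

Adding the pyspark archives to the `spark.yarn.jars` location (instead of pointing at them via `PYSPARK_ARCHIVES_PATH`) would then cover the deprecation path described above.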