Github user tgravescs commented on the issue: https://github.com/apache/spark/pull/16695

This seems really, really specific to the scripts living in the Hadoop conf directory and the user relying on the default mapping. I assume the Hadoop configs on the NodeManagers have a different setup than the gateways? I know this can happen, but if that's the case, shouldn't the job just use the Hadoop conf dir on the cluster rather than us copying it from the gateway? That is essentially the opposite of the problem you were trying to solve with SPARK-2669.

Yes, there can be a whole mix of these, but to me that just seems like a bad cluster setup. You should either expect to use gateway-side configs or NodeManager-side configs, not a mix of both. What if the paths are different but the scripts aren't in the Hadoop conf dir? I assume it breaks then as well, and this change doesn't handle that case.

I'm hesitant to start doing this, since it opens us up to doing the same for any/all Hadoop configs. Users can also just override the value by setting spark.hadoop.net.topology.script.file.name.
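As a rough illustration of that last point (a sketch only, not part of this PR): Spark forwards any spark.hadoop.* configuration key into the Hadoop Configuration it builds, so a user could point net.topology.script.file.name at a path that actually exists on the cluster nodes. The script path below is a made-up placeholder.

```scala
// Hypothetical sketch: overriding the topology script path from the Spark side,
// rather than relying on the value copied over from the gateway's hadoop-conf.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("topology-override-example")
  // spark.hadoop.* keys are copied into the Hadoop Configuration Spark creates,
  // so this sets net.topology.script.file.name for the job.
  .config("spark.hadoop.net.topology.script.file.name",
          "/etc/hadoop/conf/topology.sh") // assumed cluster-side path, adjust as needed
  .getOrCreate()
```

The same override could equally be passed on the command line via --conf at submit time.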