Github user srowen commented on the pull request:

    https://github.com/apache/spark/pull/5294#issuecomment-88247299
  
    I don't know that it's controversial. As in all things, it's a question of 
how much of a problem it solves for how many users versus how much burden it 
puts on other users or current and future maintainers. I agree there's not a 
lot of complexity here besides yet another config parameter (albeit, OK, 
undocumented), so I was asking about how much of a problem it solves and when.
    
    So, you package Hadoop A with Spark, which is compatible enough with Hadoop 
B deployed on your cluster that you can run Spark jobs using Hadoop A on that 
cluster. But this change is meant to defend against Hadoop C being deployed 
under you, a version that can't coexist with your Spark, and yet the Spark + 
Hadoop A combo would still execute correctly on the Hadoop C cluster? Is that 
something that realistically happens?
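
    For concreteness, a config-gated check of the kind being discussed might 
look roughly like the sketch below. This is only an illustration: the flag 
name, the default, and the failure behavior are assumptions, not what this PR 
actually implements.

        import org.apache.hadoop.util.VersionInfo
        import org.apache.spark.SparkConf

        object HadoopVersionCheck {
          // Hypothetical, undocumented flag; the real parameter name may differ.
          val CheckFlag = "spark.hadoop.version.check.enabled"

          // Fail fast if the Hadoop version actually on the classpath differs
          // from the version Spark was packaged against, unless the check has
          // been disabled via the flag above.
          def check(conf: SparkConf, packagedVersion: String): Unit = {
            if (conf.getBoolean(CheckFlag, true)) {
              val deployed = VersionInfo.getVersion
              if (deployed != packagedVersion) {
                throw new IllegalStateException(
                  s"Hadoop version mismatch: packaged against $packagedVersion, " +
                    s"but found $deployed on the classpath")
              }
            }
          }
        }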

