Github user srowen commented on the pull request:

    https://github.com/apache/spark/pull/5786#issuecomment-101445511
  
    @vanzin what works when the build sets `hadoop.version=1.0.4` that
    doesn't work when it sets `hadoop.version=2.2.0`? Just "running on
    Hadoop 1.x"? Agreed, but that is no longer supposed to work by default
    if the default Hadoop version is 2.x. Whatever the problem is, it is
    already a problem, since the Spark 1.3 POMs already specify 2.2.0.
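    
    For context, that default lives as a plain property in the parent POM;
    a minimal sketch (the actual Spark `pom.xml` has more around it):
    
    ```xml
    <!-- Sketch of the parent pom.xml default; Spark 1.3+ sets 2.2.0 here. -->
    <properties>
      <hadoop.version>2.2.0</hadoop.version>
    </properties>
    <!-- A command-line -Dhadoop.version=... overrides this default,
         since -D user properties take precedence over POM properties. -->
    ```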
    
    Anyway, maybe that's just violent agreement that something has to be
    tweaked. If this is merged as a resolution for 1.4, that's OK by me.
    
    I don't like `activeByDefault`, if only because such a profile gets
    disabled when any other profile is selected, not just a Hadoop-related
    one (see the sketch below).
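    
    To illustrate the pitfall with a generic Maven sketch (not the actual
    Spark POM):
    
    ```xml
    <!-- A profile marked active by default... -->
    <profile>
      <id>hadoop-2.2</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <properties>
        <hadoop.version>2.2.0</hadoop.version>
      </properties>
    </profile>
    <!-- ...is switched off as soon as ANY other profile in the same POM
         is activated explicitly, so e.g. `mvn -Phive package` would
         silently drop hadoop-2.2, even though the hive profile has
         nothing to do with Hadoop versions. -->
    ```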
    
    I think coaching users in the docs to always set one of these Hadoop
    profiles may be safer and more overt. The net change would then be:
    everywhere in this PR that doesn't already say `-Phadoop-x.y` should
    add `-Phadoop-2.2`, which is effectively a no-op profile, but at least
    it's explicit.
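    
    Concretely, the documented command lines would look something like
    this (the extra flags are illustrative, in the style of the usual
    Spark build invocations):
    
    ```sh
    # Explicit, even though hadoop-2.2 matches the default hadoop.version:
    mvn -Phadoop-2.2 -DskipTests clean package
    
    # Other versions keep their existing profile + version pairs:
    mvn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package
    ```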
    
    Eventually, when, say, Hadoop 1.x support really goes away, the
    `hadoop-1` profile goes away with it and breaks any command line that
    selects it, but that's a good thing.


