[ https://issues.apache.org/jira/browse/SPARK-3819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14161513#comment-14161513 ]
Patrick Wendell commented on SPARK-3819:
----------------------------------------

It's not feasible to run against multiple Hadoop versions for every pull request, but our nightly builds do run against 4 different Hadoop versions:

https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-SBT/
https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-Maven-pre-YARN/
https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-Maven-with-YARN/

> Jenkins should compile Spark against multiple versions of Hadoop
> ----------------------------------------------------------------
>
>                 Key: SPARK-3819
>                 URL: https://issues.apache.org/jira/browse/SPARK-3819
>             Project: Spark
>          Issue Type: Bug
>          Components: Build
>    Affects Versions: 1.1.0
>            Reporter: Matt Cheah
>            Priority: Minor
>              Labels: Jenkins
>             Fix For: 1.1.1
>
> The build broke because of PR https://github.com/apache/spark/pull/2609#issuecomment-57962393; however, the build failure was not caught by Jenkins. From what I understand, the failure occurs when Spark is built manually against certain versions of Hadoop.
> It seems intuitive that Jenkins should catch this sort of thing: the code should be compiled against multiple Hadoop versions. Running the full test suite against every Hadoop version, however, seems like overkill.
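The compile-only matrix the issue asks for could be sketched roughly as below. This is a hypothetical illustration, not the actual Jenkins job configuration: the Hadoop version list and the plain `mvn` invocation follow the Spark 1.x build documentation, and the exact Maven profiles needed (e.g. `-Phadoop-2.4`, `-Pyarn`) vary by Spark and Hadoop version. The loop only prints the commands it would run, as a dry run.

```shell
#!/bin/sh
# Hypothetical compile-only matrix over several Hadoop versions.
# -DskipTests limits each pass to compilation, since running the full
# test suite per Hadoop version would be overkill, as the issue notes.
HADOOP_VERSIONS="1.0.4 2.2.0 2.3.0 2.4.0"

for hv in $HADOOP_VERSIONS; do
  # Print rather than execute each build command (dry run); a real
  # Jenkins job would run it and fail the build on a compile error.
  echo "mvn -Dhadoop.version=${hv} -DskipTests clean package"
done
```

A real setup would typically express this as a Jenkins matrix job (one axis per Hadoop version) so each compilation pass gets its own pass/fail status, rather than a single shell loop that stops at the first failure.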