[ https://issues.apache.org/jira/browse/SPARK-3819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14162195#comment-14162195 ]

Matt Cheah commented on SPARK-3819:
-----------------------------------

Can you elaborate on why it is not feasible to build against multiple Hadoop 
versions? Is it simply too slow?

I still strongly stand by the idea of making it explicit to contributors that 
changes must build against multiple Hadoop versions. We need to minimize the 
risk of breaking the build.
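For context, a compile-only pass over several Hadoop profiles could be sketched roughly as follows. This is a minimal sketch: the profile names, Hadoop versions, and the compile-only Maven invocation are illustrative assumptions, not the project's actual Jenkins configuration.

```shell
#!/bin/sh
# Hypothetical sketch of a Jenkins compile-only matrix over Hadoop profiles.
# Profile names and versions are assumptions for illustration.
set -e

hadoop_profiles="hadoop-1.0 hadoop-2.2 hadoop-2.4"

for profile in $hadoop_profiles; do
  # Compile only, skipping the test suite, to catch per-version build
  # breakage cheaply. Shown as a dry run (echo) here.
  echo "mvn -P$profile -DskipTests clean compile"
done
```

Running compilation alone per profile keeps the matrix cheap, while the full test suite runs against a single default profile.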

> Jenkins should compile Spark against multiple versions of Hadoop
> ----------------------------------------------------------------
>
>                 Key: SPARK-3819
>                 URL: https://issues.apache.org/jira/browse/SPARK-3819
>             Project: Spark
>          Issue Type: Bug
>          Components: Build
>    Affects Versions: 1.1.0
>            Reporter: Matt Cheah
>            Priority: Minor
>              Labels: Jenkins
>             Fix For: 1.1.1
>
>
> The build broke because of PR 
> https://github.com/apache/spark/pull/2609#issuecomment-57962393 - however the 
> build failure was not caught by Jenkins. From what I understand, the build 
> failure occurs when Spark is built manually against certain versions of 
> Hadoop.
> It seems intuitive that Jenkins should catch this sort of thing. The code 
> should be compiled against multiple Hadoop versions. It seems like overkill 
> to run the full test suite against all Hadoop versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
