[ https://issues.apache.org/jira/browse/SPARK-2420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14056738#comment-14056738 ]

Sean Owen commented on SPARK-2420:
----------------------------------

Heh, yes, +1 to upgrading Guava in Hadoop. I don't know why it hasn't been done; 
it's not just a Hive thing. Right now I don't think Spark actually uses anything 
that exists only in Guava 12+, or we would have seen the error on Hadoop 
already. That's why I say it's a bit less than ideal to compile against 14, but 
it's also why downgrading it for the purposes of another project probably works 
fine. Even if Hadoop updated Guava in 2.5.0, it wouldn't help until Spark 
dropped support for everything earlier. This is, I suppose, a case study in why 
shading really common libraries would be a great idea.
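To make the shading suggestion concrete, here is a minimal sketch of how a 
Maven build could relocate Guava's packages with the maven-shade-plugin so the 
bundled copy cannot collide with whatever Guava version Hadoop or Hive brings 
in. The shaded package name `org.spark_project.guava` is illustrative, not 
something prescribed by this issue:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <!-- Rewrite Guava's package names inside the assembly jar, so the
             bundled Guava 14 classes cannot clash with the older Guava
             that Hadoop/Hive put on the classpath. -->
        <pattern>com.google.common</pattern>
        <shadedPattern>org.spark_project.guava</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With a relocation like this, both projects can carry their own Guava at 
different versions without either one winning a classpath race.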

> Change Spark build to minimize library conflicts
> ------------------------------------------------
>
>                 Key: SPARK-2420
>                 URL: https://issues.apache.org/jira/browse/SPARK-2420
>             Project: Spark
>          Issue Type: Wish
>          Components: Build
>    Affects Versions: 1.0.0
>            Reporter: Xuefu Zhang
>         Attachments: spark_1.0.0.patch
>
>
> During the prototyping of HIVE-7292, many library conflicts showed up because 
> the Spark build contains library versions that are vastly different from those 
> in the current major Hadoop version. It would be nice if we could either 
> choose versions in line with Hadoop's or shade them in the assembly. Here is 
> the wish list:
> 1. Upgrade the protobuf version from the current 2.4.1 to 2.5.0.
> 2. Shade Spark's jetty and servlet dependencies in the assembly.
> 3. Resolve the Guava version difference. Spark is using a higher version, and 
> I'm not sure what the best solution is.
> The list may grow as HIVE-7292 proceeds.
> For information only, the attached is a patch that we applied to Spark in 
> order to make it work with Hive. It gives an idea of the scope of the changes.
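The version-alignment wish (item 1 above) is the kind of change that a Maven 
build typically expresses through `dependencyManagement`. A hedged sketch, 
assuming Spark's POM resolves protobuf transitively; the 2.5.0 version comes 
from the wish list above:

```xml
<!-- Sketch only: pin protobuf-java to the version the current major
     Hadoop release ships (2.5.0), overriding the 2.4.1 that Spark's
     build would otherwise pull in. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.protobuf</groupId>
      <artifactId>protobuf-java</artifactId>
      <version>2.5.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Unlike shading, a pin like this changes the version for every consumer of the 
assembly, which is why it only works when the two versions are 
binary-compatible for the calls Spark actually makes.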



--
This message was sent by Atlassian JIRA
(v6.2#6252)
