[ https://issues.apache.org/jira/browse/SPARK-2420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14066600#comment-14066600 ]

Xuefu Zhang commented on SPARK-2420:
------------------------------------

{quote}
2. For jetty, it was a problem with the Hive on Spark POC, possibly because we 
shipped all libraries from the Hive process's classpath to the Spark cluster. We 
have a task (HIVE-7371) to identify a minimum set of jars to be shipped. With 
that, the story might change. We will confirm whether Jetty is a problem once we 
have a better idea on HIVE-7371. 
{quote}

Even better news: with the latest work in HIVE-7292, we found that 
servlet-api/jetty no longer seems to be a problem. Thus, the only remaining 
conflict is guava, for which HIVE-7387 has all the details.
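
For anyone following along, one way to sidestep a guava conflict like this is to 
relocate the Guava classes when building the Spark assembly. The fragment below 
is only an illustrative maven-shade-plugin sketch; the shaded package name and 
plugin placement are assumptions, not what HIVE-7387 actually settles on.

{code:xml}
<!-- Illustrative only: relocate Guava classes inside the Spark assembly jar
     so that Hive's own Guava version does not clash with Spark's. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <!-- hypothetical shaded package name -->
            <shadedPattern>org.spark.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}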

> Change Spark build to minimize library conflicts
> ------------------------------------------------
>
>                 Key: SPARK-2420
>                 URL: https://issues.apache.org/jira/browse/SPARK-2420
>             Project: Spark
>          Issue Type: Wish
>          Components: Build
>    Affects Versions: 1.0.0
>            Reporter: Xuefu Zhang
>         Attachments: spark_1.0.0.patch
>
>
> During the prototyping of HIVE-7292, many library conflicts showed up because 
> the Spark build contains library versions that are vastly different from those 
> in the current major Hadoop version. It would be nice if we could either choose 
> versions in line with Hadoop or shade the conflicting libraries in the 
> assembly. Here is the wish list:
> 1. Upgrade the protobuf version from the current 2.4.1 to 2.5.0 (a sketch of 
> one option follows below).
> 2. Shade Spark's jetty and servlet dependencies in the assembly.
> 3. Resolve the guava version difference. Spark is using a higher version, and 
> I'm not sure what the best solution is for this.
> The list may grow as HIVE-7292 proceeds.
> For information only, the attached patch is the one we applied to Spark in 
> order to make it work with Hive. It gives an idea of the scope of the changes.
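
As an illustration of item 1 in the wish list above, pinning protobuf to 2.5.0 
could be done with a standard Maven dependencyManagement override. This is only 
a sketch under the assumption that Maven drives the build; it is not taken from 
the attached patch.

{code:xml}
<!-- Illustrative sketch only: force protobuf-java 2.5.0 (the version used by
     recent Hadoop 2.x releases) instead of 2.4.1 across the modules. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.protobuf</groupId>
      <artifactId>protobuf-java</artifactId>
      <version>2.5.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}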



--
This message was sent by Atlassian JIRA
(v6.2#6252)
