[ https://issues.apache.org/jira/browse/SPARK-1698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987698#comment-13987698 ]

Sean Owen commented on SPARK-1698:
----------------------------------

What is the suggested change in this particular JIRA? I saw the PR, which seems
to replace the shade plugin with the assembly plugin. Given the reference to
https://issues.scala-lang.org/browse/SI-6660, are you suggesting that your
assembly change packages things differently, by putting jars inside jars? The
issue you link to is exactly the kind of problem that can occur with that
approach; it comes up a bit in Hadoop as well, even though it is in theory a
fine way to do things. But is that what you're getting at?
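For context on the jars-in-jars question: below is a minimal sketch, assuming a
hypothetical spark-assembly.jar that carries a dependency as a nested
lib/guava.jar entry, of why that layout needs a custom launcher (the point of
SI-6660). A plain java.net.URLClassLoader only loads classes from directories
and top-level jars, so classes that exist only inside a nested jar are not
visible to it.

    import java.net.{URL, URLClassLoader}

    object NestedJarSketch {
      def main(args: Array[String]): Unit = {
        // Hypothetical assembly jar that keeps lib/guava.jar as a nested entry
        // instead of merging its classes into the top level.
        val assembly = new URL("file:spark-assembly.jar")
        val loader   = new URLClassLoader(Array(assembly), null)

        // A class merged directly into the assembly would resolve here; one
        // that exists only inside the nested lib/guava.jar does not, because
        // URLClassLoader never looks inside nested jar entries.
        try {
          loader.loadClass("com.google.common.collect.ImmutableList")
          println("class resolved from the assembly")
        } catch {
          case e: ClassNotFoundException =>
            println(s"not visible to a plain URLClassLoader: ${e.getMessage}")
        }
      }
    }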

> Improve spark integration
> -------------------------
>
>                 Key: SPARK-1698
>                 URL: https://issues.apache.org/jira/browse/SPARK-1698
>             Project: Spark
>          Issue Type: Improvement
>          Components: Build, Deploy
>            Reporter: Guoqiang Li
>            Assignee: Guoqiang Li
>             Fix For: 1.0.0
>
>
> Using the shade plugin to create one big JAR with all the dependencies can
> cause a few problems:
> 1. The jars' meta information is missing
> 2. Some files are overwritten, e.g. plugin.xml (see the sketch below)
> 3. Different versions of the same jar may co-exist
> 4. The JAR is too big; Java 6 does not support it
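On problem 2 in the list above: here is a minimal sketch, with assumed jar
paths, of how a flattened uber-jar loses same-named resources. Several
dependency jars may each carry a file at the same path (plugin.xml,
reference.conf, META-INF/services/...); standard jar tooling writes one entry
per name, so a naive merge keeps only one copy, whereas on the original
classpath every copy was visible.

    import java.util.jar.JarFile

    object DuplicateResourceSketch {
      def main(args: Array[String]): Unit = {
        // Hypothetical inputs: two dependency jars that each ship a
        // reference.conf, and the uber-jar produced by flattening them.
        val dependencyJars = Seq("lib/akka-actor.jar", "lib/akka-remote.jar")
        val uberJar        = "spark-assembly.jar"
        val resource       = "reference.conf"

        // Each dependency jar contributes its own copy on a normal classpath.
        val onClasspath = dependencyJars.count { path =>
          new JarFile(path).getEntry(resource) != null
        }

        // A flattened jar holds at most one entry with this name unless the
        // build merged the copies explicitly (e.g. an appending/merging rule
        // in the shade or assembly configuration).
        var inUberJar = 0
        val entries = new JarFile(uberJar).entries()
        while (entries.hasMoreElements) {
          if (entries.nextElement().getName == resource) inUberJar += 1
        }

        println(s"$resource: $onClasspath copies across dependency jars, " +
                s"$inUberJar in the uber-jar")
      }
    }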



--
This message was sent by Atlassian JIRA
(v6.2#6252)
