[ https://issues.apache.org/jira/browse/HIVE-16391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501285#comment-16501285 ]
Saisai Shao commented on HIVE-16391:
------------------------------------

Hi [~joshrosen], I'm trying to make the Hive changes you mentioned above using the new {{core-spark}} classifier. I ran into one problem with releasing two shaded jars (one is hive-exec, the other is hive-exec-core-spark): the published pom file is still the dependency-reduced pom that corresponds to hive-exec, so when Spark consumes the hive-exec-core-spark jar it has to explicitly declare all of hive-exec's transitive dependencies itself. I'm not sure whether there is a way to publish two pom files, one for each shaded jar, or whether it is acceptable for Spark to explicitly declare all the transitive dependencies, as it did with the {{core}} classifier you used before? (A rough sketch of the shade-plugin setup I have in mind is below the quoted issue description.)

> Publish proper Hive 1.2 jars (without including all dependencies in uber jar)
> -----------------------------------------------------------------------------
>
>                 Key: HIVE-16391
>                 URL: https://issues.apache.org/jira/browse/HIVE-16391
>             Project: Hive
>          Issue Type: Task
>          Components: Build Infrastructure
>            Reporter: Reynold Xin
>            Priority: Major
>
> Apache Spark currently depends on a forked version of Apache Hive. AFAIK, the
> only change in the fork is to work around the issue that Hive publishes only
> two sets of jars: one set with no dependencies declared, and another with all
> the dependencies included in the published uber jar. That is to say, Hive
> doesn't publish a set of jars with the proper dependencies declared.
> There is general consensus on both sides that we should remove the forked
> Hive.
> The change in the forked version is recorded here:
> https://github.com/JoshRosen/hive/tree/release-1.2.1-spark2
> Note that the fork in the past included other fixes, but those have all become
> unnecessary.
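For reference, a minimal sketch of the kind of shade-plugin setup discussed in the comment above: two executions in one module, with the second shaded jar attached under the {{core-spark}} classifier. This is not Hive's actual ql/pom.xml; the classifier name comes from the comment, and the excluded artifacts are purely illustrative. It only shows why a single dependency-reduced pom ends up covering both jars.

{code:xml}
<!-- Hypothetical sketch, placed under <build><plugins> of the hive-exec module.
     Maven publishes exactly one pom per module, and createDependencyReducedPom
     rewrites that single pom, so both classified jars share it. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <!-- Full uber jar: replaces the main hive-exec artifact. -->
    <execution>
      <id>uber</id>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <createDependencyReducedPom>true</createDependencyReducedPom>
      </configuration>
    </execution>
    <!-- Slimmer jar for Spark: attached as hive-exec-<version>-core-spark.jar.
         It reuses the same reduced pom, so the dependencies it leaves out are
         not declared anywhere Spark could pick them up transitively. -->
    <execution>
      <id>core-spark</id>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <shadedArtifactAttached>true</shadedArtifactAttached>
        <shadedClassifierName>core-spark</shadedClassifierName>
        <artifactSet>
          <excludes>
            <!-- illustrative only: leave bundled third-party jars out of this variant -->
            <exclude>com.google.guava:guava</exclude>
            <exclude>com.google.protobuf:protobuf-java</exclude>
          </excludes>
        </artifactSet>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}

With a layout like this, whichever classified jar Spark consumes, its dependencies would still have to be declared explicitly on the Spark side, since Maven associates only the one reduced pom with both jars.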