Repository: spark
Updated Branches:
  refs/heads/master 51e2b38d9 -> 278984d5a


[SPARK-25019][BUILD] Fix orc dependency to use the same exclusion rules

## What changes were proposed in this pull request?

While upgrading Apache ORC to 1.5.2 
([SPARK-24576](https://issues.apache.org/jira/browse/SPARK-24576)), the `sql/core` 
module overrode the exclusion rules of the parent pom file, which causes the 
published `spark-sql_2.1X` artifacts to have incomplete exclusion rules 
([SPARK-25019](https://issues.apache.org/jira/browse/SPARK-25019)). This PR 
fixes that by moving the newly added exclusion rule to the parent pom. It also 
removes the sbt build hack introduced at that time.
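For context, a minimal sketch (not the actual Spark pom) of why the move matters: exclusions declared on a dependency inside the parent's `<dependencyManagement>` are inherited by every child module that references that dependency, but a child that declares its own `<exclusions>` block replaces the inherited rules entirely rather than appending to them.

```xml
<!-- Hypothetical parent pom.xml: exclusions declared here are inherited -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.orc</groupId>
      <artifactId>orc-core</artifactId>
      <version>1.5.2</version>
      <exclusions>
        <exclusion>
          <groupId>org.apache.hadoop</groupId>
          <artifactId>hadoop-hdfs</artifactId>
        </exclusion>
        <exclusion>
          <groupId>org.apache.hive</groupId>
          <artifactId>hive-storage-api</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- Hypothetical child module pom: no <exclusions> element here, so the
     managed exclusions above apply to the published artifact as well -->
<dependency>
  <groupId>org.apache.orc</groupId>
  <artifactId>orc-core</artifactId>
</dependency>
```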

## How was this patch tested?

Pass the existing dependency check and the tests.
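One way to run those checks locally (a sketch assuming a Spark checkout; `dev/test-dependencies.sh` diffs the resolved classpath against the checked-in `dev/deps` manifests):

```
# Regenerate the resolved dependency lists and compare with dev/deps/*
./dev/test-dependencies.sh

# Spot-check that hadoop-hdfs no longer leaks through the ORC dependencies
build/mvn -pl sql/core dependency:tree -Dincludes=org.apache.hadoop:hadoop-hdfs
```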

Author: Dongjoon Hyun <dongj...@apache.org>

Closes #22003 from dongjoon-hyun/SPARK-25019.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/278984d5
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/278984d5
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/278984d5

Branch: refs/heads/master
Commit: 278984d5a5e56136c9f940f2d0e3d2040fad180b
Parents: 51e2b38
Author: Dongjoon Hyun <dongj...@apache.org>
Authored: Mon Aug 6 12:00:39 2018 -0700
Committer: Yin Huai <yh...@databricks.com>
Committed: Mon Aug 6 12:00:39 2018 -0700

----------------------------------------------------------------------
 pom.xml          |  4 ++++
 sql/core/pom.xml | 28 ----------------------------
 2 files changed, 4 insertions(+), 28 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/278984d5/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index c46eb31..8abdb70 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1744,6 +1744,10 @@
             <artifactId>hadoop-common</artifactId>
           </exclusion>
           <exclusion>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-hdfs</artifactId>
+          </exclusion>
+          <exclusion>
             <groupId>org.apache.hive</groupId>
             <artifactId>hive-storage-api</artifactId>
           </exclusion>

http://git-wip-us.apache.org/repos/asf/spark/blob/278984d5/sql/core/pom.xml
----------------------------------------------------------------------
diff --git a/sql/core/pom.xml b/sql/core/pom.xml
index 68b42a4..ba17f5f 100644
--- a/sql/core/pom.xml
+++ b/sql/core/pom.xml
@@ -90,39 +90,11 @@
       <groupId>org.apache.orc</groupId>
       <artifactId>orc-core</artifactId>
       <classifier>${orc.classifier}</classifier>
-      <exclusions>
-        <exclusion>
-          <groupId>org.apache.hadoop</groupId>
-          <artifactId>hadoop-hdfs</artifactId>
-        </exclusion>
-        <!--
-          orc-core:nohive doesn't have this dependency, but we adds this to prevent
-          sbt from getting confused.
-        -->
-        <exclusion>
-          <groupId>org.apache.hive</groupId>
-          <artifactId>hive-storage-api</artifactId>
-        </exclusion>
-      </exclusions>
     </dependency>
     <dependency>
       <groupId>org.apache.orc</groupId>
       <artifactId>orc-mapreduce</artifactId>
       <classifier>${orc.classifier}</classifier>
-      <exclusions>
-        <exclusion>
-          <groupId>org.apache.hadoop</groupId>
-          <artifactId>hadoop-hdfs</artifactId>
-        </exclusion>
-        <!--
-          orc-core:nohive doesn't have this dependency, but we adds this to prevent
-          sbt from getting confused.
-        -->
-        <exclusion>
-          <groupId>org.apache.hive</groupId>
-          <artifactId>hive-storage-api</artifactId>
-        </exclusion>
-      </exclusions>
     </dependency>
     <dependency>
       <groupId>org.apache.parquet</groupId>

