marton-bod commented on a change in pull request #1478:
URL: https://github.com/apache/iceberg/pull/1478#discussion_r493815788



##########
File path: settings.gradle
##########
@@ -17,44 +17,64 @@
  * under the License.
  */
 
+gradle.ext.hive3Enabled = false
+if (System.getProperty("hive3") != null) {
+  if (JavaVersion.current() == JavaVersion.VERSION_1_8) {
+    println "*** 'hive3' flag detected - building with Hive3 / Hadoop3 ***"
+    gradle.ext.hive3Enabled = true
+  } else {
+    println "*** 'hive3' flag detected, but with JDK version other than 1.8 - skipping Hive3 / Hadoop3 build. ***"
+  }
+}
+
 rootProject.name = 'iceberg'
 include 'api'
 include 'common'
 include 'core'
 include 'data'
-include 'flink'
-include 'flink-runtime'
 include 'mr'
 include 'hive-runtime'
 include 'orc'
 include 'arrow'
 include 'parquet'
 include 'bundled-guava'
-include 'spark'
-include 'spark3'
-include 'spark3-runtime'
 include 'pig'
 include 'hive-metastore'
+if (gradle.ext.hive3Enabled) {
+  include 'mr-hive3'
+} else {
+  include 'mr-hive2'

Review comment:
       The reason I excluded Spark is that some unit tests were failing with Hive3/Hadoop3: `iceberg-spark2`/`iceberg-spark3` pull in Hive2 as a transitive dependency via `org.apache.spark:spark-hive_2.11/2.12`, and that Hive2 client fails to communicate with the Hive3 metastore in the tests. Flink does pass all unit tests, but I did not want to include it yet so that it doesn't inadvertently break the Hadoop3 tests - although I'm definitely open to including it if the community supports having a Hadoop3 version of Flink as well.
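   As an aside, one hypothetical workaround (not part of this PR) would be to exclude the conflicting Hive2 artifacts from the Spark test dependency. A minimal Gradle sketch, assuming the conflicting jars are published under the `org.apache.hive` group as `hive-exec`/`hive-metastore` and that the Spark version shown is illustrative (the actual coordinates may differ, since Spark historically shipped a forked Hive):

```groovy
// Hypothetical build.gradle fragment - group/module/version are illustrative.
dependencies {
  testImplementation('org.apache.spark:spark-hive_2.12:2.4.7') {
    // Drop the Hive 2 jars that conflict with the Hive 3 metastore in tests.
    exclude group: 'org.apache.hive', module: 'hive-exec'
    exclude group: 'org.apache.hive', module: 'hive-metastore'
  }
}
```

   Whether this is viable depends on whether Spark's own Hive integration still functions once those modules are excluded, which is why the PR simply leaves the Spark modules out of the Hive3 build instead.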
    
   I agree that hadoop3 could be a better flag, thanks for pointing it out.
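   If the flag were renamed, the check in settings.gradle could read as follows (a sketch mirroring the logic already in this PR, with only the property and variable names changed):

```groovy
// Hypothetical variant of the PR's check with the flag renamed to 'hadoop3'.
gradle.ext.hadoop3Enabled = false
if (System.getProperty("hadoop3") != null) {
  if (JavaVersion.current() == JavaVersion.VERSION_1_8) {
    println "*** 'hadoop3' flag detected - building with Hive3 / Hadoop3 ***"
    gradle.ext.hadoop3Enabled = true
  } else {
    println "*** 'hadoop3' flag detected, but JDK is not 1.8 - skipping Hive3 / Hadoop3 build. ***"
  }
}
```

   Either name would be passed on the command line as a JVM system property, e.g. `./gradlew build -Dhadoop3`, which makes `System.getProperty("hadoop3")` non-null during settings evaluation.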
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
