[ https://issues.apache.org/jira/browse/SPARK-17563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15496614#comment-15496614 ]

Oleksiy Sayankin commented on SPARK-17563:
------------------------------------------

After three hours of work I have found that the Spark-2.0.0 API has changed too
much compared to the Spark-1.6.1 API for an easy fix. I was able to fix the
Spark Remote Client subproject, but the Hive Query Language module still
produces many compilation errors.

{code}
[INFO] Hive ............................................... SUCCESS [  0.883 s]
[INFO] Hive Shims Common .................................. SUCCESS [  2.424 s]
[INFO] Hive Shims 0.23 .................................... SUCCESS [  1.132 s]
[INFO] Hive Shims Scheduler ............................... SUCCESS [  0.299 s]
[INFO] Hive Shims ......................................... SUCCESS [  0.199 s]
[INFO] Hive Storage API ................................... SUCCESS [  0.851 s]
[INFO] Hive ORC ........................................... SUCCESS [  2.346 s]
[INFO] Hive Common ........................................ SUCCESS [  3.567 s]
[INFO] Hive Serde ......................................... SUCCESS [  2.513 s]
[INFO] Hive Metastore ..................................... SUCCESS [ 10.782 s]
[INFO] Hive Ant Utilities ................................. SUCCESS [  0.818 s]
[INFO] Hive Llap Common ................................... SUCCESS [  0.859 s]
[INFO] Hive Llap Client ................................... SUCCESS [  0.337 s]
[INFO] Hive Llap Tez ...................................... SUCCESS [  0.525 s]
[INFO] Spark Remote Client ................................ SUCCESS [  1.547 s]
[INFO] Hive Query Language ................................ FAILURE [ 19.686 s]
[INFO] Hive Service ....................................... SKIPPED
[INFO] Hive Accumulo Handler .............................. SKIPPED
[INFO] Hive JDBC .......................................... SKIPPED
[INFO] Hive Beeline ....................................... SKIPPED
[INFO] Hive CLI ........................................... SKIPPED
[INFO] Hive Contrib ....................................... SKIPPED
[INFO] Hive HBase Handler ................................. SKIPPED
[INFO] Hive HCatalog ...................................... SKIPPED
[INFO] Hive HCatalog Core ................................. SKIPPED
[INFO] Hive HCatalog Pig Adapter .......................... SKIPPED
[INFO] Hive HCatalog Server Extensions .................... SKIPPED
[INFO] Hive HCatalog Webhcat Java Client .................. SKIPPED
[INFO] Hive HCatalog Webhcat .............................. SKIPPED
[INFO] Hive HCatalog Streaming ............................ SKIPPED
[INFO] Hive HPL/SQL ....................................... SKIPPED
[INFO] Hive HWI ........................................... SKIPPED
[INFO] Hive ODBC .......................................... SKIPPED
[INFO] Hive Llap Server ................................... SKIPPED
[INFO] Hive Shims Aggregator .............................. SKIPPED
[INFO] Hive TestUtils ..................................... SKIPPED
[INFO] Hive Packaging ..................................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 49.643 s
[INFO] Finished at: 2016-09-16T18:27:24+03:00
[INFO] Final Memory: 154M/2994M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hive-exec: Compilation failure: Compilation failure:
[ERROR] 
/home/osayankin/git/myrepo/hive/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/HiveReduceFunction.java:[28,8]
 org.apache.hadoop.hive.ql.exec.spark.HiveReduceFunction is not abstract and 
does not override abstract method 
call(java.util.Iterator<scala.Tuple2<org.apache.hadoop.hive.ql.io.HiveKey,java.lang.Iterable<org.apache.hadoop.io.BytesWritable>>>)
 in org.apache.spark.api.java.function.PairFlatMapFunction
[ERROR] 
/home/osayankin/git/myrepo/hive/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/HiveReduceFunction.java:[40,3]
 
call(java.util.Iterator<scala.Tuple2<org.apache.hadoop.hive.ql.io.HiveKey,java.lang.Iterable<org.apache.hadoop.io.BytesWritable>>>)
 in org.apache.hadoop.hive.ql.exec.spark.HiveReduceFunction cannot implement 
call(T) in org.apache.spark.api.java.function.PairFlatMapFunction
[ERROR] return type 
java.lang.Iterable<scala.Tuple2<org.apache.hadoop.hive.ql.io.HiveKey,org.apache.hadoop.io.BytesWritable>>
 is not compatible with 
java.util.Iterator<scala.Tuple2<org.apache.hadoop.hive.ql.io.HiveKey,org.apache.hadoop.io.BytesWritable>>
[ERROR] 
/home/osayankin/git/myrepo/hive/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/HiveReduceFunction.java:[38,3]
 method does not override or implement a method from a supertype
[ERROR] 
/home/osayankin/git/myrepo/hive/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SortByShuffler.java:[64,18]
 org.apache.hadoop.hive.ql.exec.spark.SortByShuffler.ShuffleFunction is not 
abstract and does not override abstract method 
call(java.util.Iterator<scala.Tuple2<org.apache.hadoop.hive.ql.io.HiveKey,org.apache.hadoop.io.BytesWritable>>)
 in org.apache.spark.api.java.function.PairFlatMapFunction
[ERROR] 
/home/osayankin/git/myrepo/hive/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SortByShuffler.java:[71,63]
 
call(java.util.Iterator<scala.Tuple2<org.apache.hadoop.hive.ql.io.HiveKey,org.apache.hadoop.io.BytesWritable>>)
 in org.apache.hadoop.hive.ql.exec.spark.SortByShuffler.ShuffleFunction cannot 
implement call(T) in org.apache.spark.api.java.function.PairFlatMapFunction
[ERROR] return type 
java.lang.Iterable<scala.Tuple2<org.apache.hadoop.hive.ql.io.HiveKey,java.lang.Iterable<org.apache.hadoop.io.BytesWritable>>>
 is not compatible with 
java.util.Iterator<scala.Tuple2<org.apache.hadoop.hive.ql.io.HiveKey,java.lang.Iterable<org.apache.hadoop.io.BytesWritable>>>
[ERROR] 
/home/osayankin/git/myrepo/hive/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SortByShuffler.java:[70,5]
 method does not override or implement a method from a supertype
[ERROR] 
/home/osayankin/git/myrepo/hive/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/impl/LocalSparkJobStatus.java:[183,44]
 cannot find symbol
[ERROR] symbol:   method isEmpty()
[ERROR] location: class org.apache.spark.executor.InputMetrics
[ERROR] 
/home/osayankin/git/myrepo/hive/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/impl/LocalSparkJobStatus.java:[185,54]
 cannot find symbol
[ERROR] symbol:   method get()
[ERROR] location: class org.apache.spark.executor.InputMetrics
[ERROR] 
/home/osayankin/git/myrepo/hive/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/impl/LocalSparkJobStatus.java:[187,97]
 incompatible types
[ERROR] required: scala.Option<org.apache.spark.executor.ShuffleReadMetrics>
[ERROR] found:    org.apache.spark.executor.ShuffleReadMetrics
[ERROR] 
/home/osayankin/git/myrepo/hive/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/impl/LocalSparkJobStatus.java:[195,100]
 incompatible types
[ERROR] required: scala.Option<org.apache.spark.executor.ShuffleWriteMetrics>
[ERROR] found:    org.apache.spark.executor.ShuffleWriteMetrics
[ERROR] 
/home/osayankin/git/myrepo/hive/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/HiveMapFunction.java:[30,8]
 org.apache.hadoop.hive.ql.exec.spark.HiveMapFunction is not abstract and does 
not override abstract method 
call(java.util.Iterator<scala.Tuple2<org.apache.hadoop.io.BytesWritable,org.apache.hadoop.io.BytesWritable>>)
 in org.apache.spark.api.java.function.PairFlatMapFunction
[ERROR] 
/home/osayankin/git/myrepo/hive/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/HiveMapFunction.java:[42,3]
 
call(java.util.Iterator<scala.Tuple2<org.apache.hadoop.io.BytesWritable,org.apache.hadoop.io.BytesWritable>>)
 in org.apache.hadoop.hive.ql.exec.spark.HiveMapFunction cannot implement 
call(T) in org.apache.spark.api.java.function.PairFlatMapFunction
[ERROR] return type 
java.lang.Iterable<scala.Tuple2<org.apache.hadoop.hive.ql.io.HiveKey,org.apache.hadoop.io.BytesWritable>>
 is not compatible with 
java.util.Iterator<scala.Tuple2<org.apache.hadoop.hive.ql.io.HiveKey,org.apache.hadoop.io.BytesWritable>>
[ERROR] 
/home/osayankin/git/myrepo/hive/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/HiveMapFunction.java:[40,3]
 method does not override or implement a method from a supertyp
{code}
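For reference, most of the errors above share one root cause: in Spark 2.0 the `call` method of `PairFlatMapFunction` (and the other `FlatMapFunction` variants) returns `java.util.Iterator` instead of `java.lang.Iterable`, so every Hive implementation that returns a collection directly no longer overrides the interface method. (The remaining errors come from the reworked `TaskMetrics` accessors, e.g. `InputMetrics` no longer being wrapped in `scala.Option`.) A minimal sketch of the signature migration, with the Spark interfaces stubbed locally so the example is self-contained (`OldFlatMapFunction`, `NewFlatMapFunction`, and `splitToIterator` are hypothetical names, not Spark API):

```java
import java.util.Arrays;
import java.util.Iterator;

public class IteratorMigration {

    // Stand-in for the Spark 1.6-style contract: call(...) returns an Iterable.
    interface OldFlatMapFunction<T, R> {
        Iterable<R> call(T t) throws Exception;
    }

    // Stand-in for the Spark 2.0-style contract: call(...) returns an Iterator.
    interface NewFlatMapFunction<T, R> {
        Iterator<R> call(T t) throws Exception;
    }

    // A 1.6-era implementation could build a List and return it directly;
    // the 2.0 port returns list.iterator() instead of the list itself.
    static Iterator<String> splitToIterator(String s) {
        return Arrays.asList(s.split(",")).iterator();
    }

    public static void main(String[] args) throws Exception {
        // Old style: the Iterable (the List) is handed back as-is.
        OldFlatMapFunction<String, String> oldFn = s -> Arrays.asList(s.split(","));

        // New style: same logic, but the Iterator is exposed.
        NewFlatMapFunction<String, String> newFn = IteratorMigration::splitToIterator;

        Iterator<String> it = newFn.call("a,b,c");
        StringBuilder sb = new StringBuilder();
        while (it.hasNext()) {
            sb.append(it.next());
        }
        System.out.println(sb); // abc
    }
}
```

The mechanical part of the port (appending `.iterator()` to each returned collection) is exactly the change that `HiveMapFunction`, `HiveReduceFunction`, and `SortByShuffler.ShuffleFunction` would need; the harder part is that their `call` signatures must also accept the new parameter types declared by the Spark 2.0 interfaces.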

> Add org/apache/spark/JavaSparkListener to make Spark-2.0.0 work with 
> Hive-2.X.X
> -------------------------------------------------------------------------------
>
>                 Key: SPARK-17563
>                 URL: https://issues.apache.org/jira/browse/SPARK-17563
>             Project: Spark
>          Issue Type: Bug
>            Reporter: Oleksiy Sayankin
>
> According to https://issues.apache.org/jira/browse/SPARK-14358 
> JavaSparkListener was deleted from Spark-2.0.0, but Hive-2.X.X uses 
> JavaSparkListener
> {code}
> package org.apache.hadoop.hive.ql.exec.spark.status.impl;
> import ...
> public class JobMetricsListener extends JavaSparkListener {
> {code}
> Configuring Hive-2.X.X on Spark-2.0.0 will give an exception:
> {code}
> 2016-09-16T11:20:57,474 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(593)) - java.lang.NoClassDefFoundError: 
> org/apache/spark/JavaSparkListener
> {code}
> Please add JavaSparkListener into Spark-2.0.0
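As SPARK-14358 explains, `JavaSparkListener` was removed because `org.apache.spark.scheduler.SparkListener` itself became an abstract class whose callbacks are no-ops by default, so Java code can extend it directly. A hedged sketch of how a listener such as `JobMetricsListener` could be ported, with the Spark base class stubbed locally so the example compiles standalone (`SparkListenerStub` and its callbacks are illustrative stand-ins, not the real Spark API):

```java
public class ListenerMigration {

    // Stand-in for Spark 2.0's SparkListener: an abstract class whose
    // callbacks are no-ops by default, so Java subclasses override only
    // what they need. This is what replaced the JavaSparkListener adapter.
    static abstract class SparkListenerStub {
        public void onJobStart(int jobId) { }
        public void onJobEnd(int jobId) { }
    }

    // The ported listener extends the base class directly instead of
    // extending JavaSparkListener.
    static class JobMetricsListener extends SparkListenerStub {
        int jobsSeen = 0;

        @Override
        public void onJobStart(int jobId) {
            jobsSeen++;
        }
    }

    public static void main(String[] args) {
        JobMetricsListener listener = new JobMetricsListener();
        listener.onJobStart(1);
        listener.onJobStart(2);
        listener.onJobEnd(2); // inherited no-op
        System.out.println(listener.jobsSeen); // 2
    }
}
```

Under this reading, the fix belongs on the Hive side (change `extends JavaSparkListener` to `extends SparkListener`) rather than restoring the removed class in Spark, since the adapter no longer serves a purpose once the base class provides default implementations.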



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
