[ https://issues.apache.org/jira/browse/HIVE-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jimmy Xiang updated HIVE-8758:
------------------------------
    Status: Open  (was: Patch Available)

> Fix hadoop-1 build [Spark Branch]
> ---------------------------------
>
>                 Key: HIVE-8758
>                 URL: https://issues.apache.org/jira/browse/HIVE-8758
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Xuefu Zhang
>            Assignee: Jimmy Xiang
>             Fix For: spark-branch
>
>         Attachments: HIVE-8758.1-spark.patch
>
>
> This may mean merging patches from trunk and fixing whatever problem specific 
> to Spark branch. Here are user reported problems:
> Problem 1:
> {code}
> Hive Serde ......................................... FAILURE [  2.357 s]
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hive-serde: Compilation failure: Compilation failure:
> [ERROR] 
> /data/hive-spark/serde/src/java/org/apache/hadoop/hive/serde2/AbstractSerDe.java:[27,24]
>  cannot find symbol
> [ERROR] symbol:   class Nullable
> [ERROR] location: package javax.annotation
> [ERROR] 
> /data/hive-spark/serde/src/java/org/apache/hadoop/hive/serde2/AbstractSerDe.java:[67,36]
>  cannot find symbol
> [ERROR] symbol:   class Nullable
> [ERROR] location: class org.apache.hadoop.hive.serde2.AbstractSerDe
> {code}
> My understanding: it looks like the Nullable annotation was added recently on 
> trunk. I added the dependency below to the hive-serde project:
> {code}
> <dependency>
>     <groupId>com.google.code.findbugs</groupId>
>     <artifactId>jsr305</artifactId>
>     <version>3.0.0</version>
> </dependency>
> {code}
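> Since Nullable is a plain compile-time marker annotation, only the annotation 
> class needs to be on the compile classpath; the jsr305 jar contributes no 
> runtime behavior. A minimal sketch of the pattern, using a hypothetical local 
> stand-in for javax.annotation.Nullable purely for illustration:
> {code}
> import java.lang.annotation.*;
> 
> // Hypothetical stand-in for javax.annotation.Nullable (normally supplied by
> // the jsr305 artifact); it is a behavior-free marker annotation.
> @Retention(RetentionPolicy.CLASS)
> @Target({ElementType.METHOD, ElementType.PARAMETER, ElementType.FIELD})
> @interface Nullable {}
> 
> public class NullableDemo {
>     // Mirrors the AbstractSerDe style: the parameter may legitimately be null.
>     static String describe(@Nullable String value) {
>         return value == null ? "absent" : value;
>     }
> 
>     public static void main(String[] args) {
>         System.out.println(describe(null)); // absent
>         System.out.println(describe("x"));  // x
>     }
> }
> {code}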
> Problem 2:
> After adding the dependency for hive-serde, got the below compilation error
> {code}
> [INFO] Hive Query Language ................................ FAILURE [01:35 
> min]
> /data/hive-spark/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/counter/SparkCounters.java:[35,39]
>  error: package org.apache.hadoop.mapreduce.util does not exist
> {code}
> The dependency jar for hadoop-1 (hadoop-core-1.2.1.jar) does not contain the 
> package “org.apache.hadoop.mapreduce.util”. To circumvent this, I added the 
> dependency below, which does contain that package (not sure this is right; I 
> just badly wanted to make the build succeed):
> {code}
> <dependency>
>     <groupId>org.apache.hadoop</groupId>
>     <artifactId>hadoop-mapreduce-client-core</artifactId>
>     <version>0.23.11</version>
> </dependency>
> {code}
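> Pulling in hadoop-mapreduce-client-core 0.23.11 alongside hadoop-core-1.2.1 
> mixes artifacts from two Hadoop lines, which is why it feels wrong; the usual 
> Hive approach is a shim layer that probes for version-specific classes and 
> branches accordingly. A rough sketch of the probing idea (the probe target 
> below is assumed from the error message, for illustration only):
> {code}
> // Sketch of shim-style version detection: check at runtime whether a class
> // from the missing package is present, instead of adding a second Hadoop
> // artifact to the hadoop-1 classpath.
> public class MapreduceUtilProbe {
>     static boolean hasClass(String name) {
>         try {
>             Class.forName(name);
>             return true;
>         } catch (ClassNotFoundException e) {
>             return false;
>         }
>     }
> 
>     public static void main(String[] args) {
>         // On a hadoop-1 classpath this prints false; with the hadoop-2
>         // mapreduce jars present it prints true.
>         System.out.println(hasClass("org.apache.hadoop.mapreduce.util.ResourceBundles"));
>     }
> }
> {code}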
> Problem 3:
> After making the above change, the build failed again in the same project, in 
> the file 
> /data/hive-spark/ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinTableContainerSerDe.java.
> In the snippet below, taken from that file, “fileStatus.isFile()” is called, 
> but that method is not available in the hadoop-1 
> “org.apache.hadoop.fs.FileStatus” API.
> {code}
> for (FileStatus fileStatus : fs.listStatus(folder)) {
>   Path filePath = fileStatus.getPath();
>   if (!fileStatus.isFile()) {
>     throw new HiveException("Error, not a file: " + filePath);
>   }
> {code}
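> For reference, hadoop-1's FileStatus does provide isDir(), so the same check 
> can be expressed in hadoop-1 terms as !fileStatus.isDir(). A sketch of such a 
> compatibility helper, using a hypothetical stub class in place of the real 
> org.apache.hadoop.fs.FileStatus so the example is self-contained:
> {code}
> // Hypothetical stand-in for org.apache.hadoop.fs.FileStatus, exposing only
> // the hadoop-1 era isDir() method.
> class FileStatusStub {
>     private final boolean dir;
>     FileStatusStub(boolean dir) { this.dir = dir; }
>     boolean isDir() { return dir; }
> }
> 
> public class FileStatusCompat {
>     // Shim-style helper: the hadoop-2 isFile() check, written against the
>     // hadoop-1 API.
>     static boolean isFile(FileStatusStub status) {
>         return !status.isDir();
>     }
> 
>     public static void main(String[] args) {
>         System.out.println(isFile(new FileStatusStub(false))); // true
>         System.out.println(isFile(new FileStatusStub(true)));  // false
>     }
> }
> {code}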



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)