[ https://issues.apache.org/jira/browse/SPARK-23534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16388761#comment-16388761 ]

Saisai Shao edited comment on SPARK-23534 at 3/7/18 12:37 AM:
--------------------------------------------------------------

I don't think so. Spark uses its own forked Hive version (hive-1.2.1.spark2), 
which doesn't include HIVE-15016 and HIVE-18550; those two patches landed only 
in the Hive community's Hive, not in Spark's fork. Unless we switch to the 
community's Hive or patch our own fork, this will not be a blocker.
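
For reference, a minimal build.sbt fragment (illustrative only) showing what 
depending on the forked Hive looks like; the coordinates below are the ones the 
fork is published under:

{code}
// Spark's forked Hive lives under org.spark-project.hive, not org.apache.hive,
// so community patches such as HIVE-15016 / HIVE-18550 never reach it unless
// the fork itself is re-released with them.
libraryDependencies ++= Seq(
  "org.spark-project.hive" % "hive-exec"      % "1.2.1.spark2",
  "org.spark-project.hive" % "hive-metastore" % "1.2.1.spark2"
)
{code}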



> Spark run on Hadoop 3.0.0
> -------------------------
>
>                 Key: SPARK-23534
>                 URL: https://issues.apache.org/jira/browse/SPARK-23534
>             Project: Spark
>          Issue Type: Improvement
>          Components: Build
>    Affects Versions: 2.3.0
>            Reporter: Saisai Shao
>            Priority: Major
>
> Major Hadoop vendors have already moved, or will soon move, to Hadoop 3.0, so 
> we should also make sure Spark can run with Hadoop 3.0. This Jira tracks the 
> work to make Spark run on Hadoop 3.0.
> The work includes:
>  # Add a new Hadoop 3.0.0 profile to make Spark buildable with Hadoop 3.0.
>  # Test whether there are dependency issues with Hadoop 3.0.
>  # Investigate the feasibility of using the shaded client jars (HADOOP-11804); see the sketch below.
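> As a rough illustration for item 3 (an assumed usage pattern, not a committed 
> design): with the HADOOP-11804 shaded artifacts (hadoop-client-api and 
> hadoop-client-runtime) on the classpath, application code keeps using the 
> public org.apache.hadoop.* API unchanged, because only the bundled third-party 
> dependencies are relocated. A minimal Scala sketch:
> {code}
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.fs.{FileSystem, Path}
>
> // The shaded hadoop-client-api jar still exposes org.apache.hadoop.* as-is;
> // only bundled third-party classes (e.g. Guava, protobuf) are relocated,
> // so this code runs the same against the classic or the shaded client jars.
> object ShadedClientCheck {
>   def main(args: Array[String]): Unit = {
>     val conf = new Configuration()
>     val fs = FileSystem.get(conf)
>     // List the filesystem root just to confirm the client wiring works.
>     fs.listStatus(new Path("/")).foreach(status => println(status.getPath))
>   }
> }
> {code}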



