[ https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493592#comment-14493592 ]

Kannan Rajah commented on SPARK-6511:
-------------------------------------

[~pwendell] Just wanted to let you know that we also have a way to add Hive and 
HBase jars to the classpath. This is useful when a setup has multiple versions 
of Hive and HBase installed, but a given Spark version only works with specific 
versions of each. We have some utility scripts that generate the right 
classpath entries based on the supported versions of Hive and HBase. If you 
think this would be useful in the Apache distribution, I can create a JIRA and 
share the code. At a high level, there are 3 files:

- compatibility.version: File that holds the supported versions of each 
ecosystem component, e.g.:
      hive_versions=0.13,0.12
      hbase_versions=0.98

- compatible_version.sh: Returns the compatible version for a component by 
looking it up in the compatibility.version file. The first listed version that 
is available on the node is used (a minimal sketch appears after this list).

- generate_classpath.sh: Uses the above 2 files to generate the classpath. This 
script is used in spark-env.sh to build the classpath from the selected Hive 
and HBase versions (see the second sketch after this list).
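
To make the lookup concrete, here is a minimal sketch of what 
compatible_version.sh could look like, based only on the behavior described 
above. The /opt/<component>-<version> install layout and the location of the 
compatibility.version file are illustrative assumptions, not our actual layout:

    #!/bin/bash
    # Usage: compatible_version.sh <component>, e.g. compatible_version.sh hive
    # Prints the first version listed in compatibility.version that is
    # installed on this node; exits non-zero if none is found.
    component="$1"
    # Pull the comma-separated list for the component, e.g.
    # "hive_versions=0.13,0.12" -> "0.13,0.12".
    versions=$(grep "^${component}_versions=" compatibility.version | cut -d= -f2)
    for v in $(echo "$versions" | tr ',' ' '); do
      # Assumed convention: each version is installed under /opt.
      if [ -d "/opt/${component}-${v}" ]; then
        echo "$v"
        exit 0
      fi
    done
    exit 1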
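
And a corresponding sketch of generate_classpath.sh; again, the jar locations 
are assumed for illustration:

    #!/bin/bash
    # Emits classpath entries for the Hive and HBase versions selected by
    # compatible_version.sh; spark-env.sh can append its output.
    classpath=""
    for component in hive hbase; do
      # compatible_version.sh exits non-zero when no supported version is
      # installed, in which case the component is skipped.
      if version=$(./compatible_version.sh "$component"); then
        classpath="${classpath}:/opt/${component}-${version}/lib/*"
      fi
    done
    # Strip the leading ":" before printing.
    echo "${classpath#:}"

spark-env.sh can then pick this up with something like 
export SPARK_CLASSPATH="$SPARK_CLASSPATH:$(generate_classpath.sh)".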

> Publish "hadoop provided" build with instructions for different distros
> -----------------------------------------------------------------------
>
>                 Key: SPARK-6511
>                 URL: https://issues.apache.org/jira/browse/SPARK-6511
>             Project: Spark
>          Issue Type: Improvement
>          Components: Build
>            Reporter: Patrick Wendell
>
> Currently we publish a series of binaries with different Hadoop client jars. 
> This mostly works, but some users have reported compatibility issues with 
> different distributions.
> One improvement moving forward might be to publish a binary build that simply 
> asks you to set HADOOP_HOME to pick up the Hadoop client location. That way 
> it would work across multiple distributions, even if they have subtle 
> incompatibilities with upstream Hadoop.
> I think a first step for this would be to produce such a build for the 
> community and see how well it works. One potential issue is that our fancy 
> excludes and dependency re-writing won't work with the simpler "append 
> Hadoop's classpath to Spark" approach. Also, how we deal with the Hive 
> dependency is unclear, i.e., should we continue to bundle Spark's Hive (which 
> has some fixes for dependency conflicts), or do we allow linking against 
> vanilla Hive at runtime?
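
For context, the "append Hadoop's classpath" approach described in the issue 
usually amounts to a one-line hook in spark-env.sh. A sketch, assuming the 
stock "hadoop classpath" command and a SPARK_DIST_CLASSPATH-style variable for 
the hadoop-provided build (the variable name and install path here are 
illustrative):

    # spark-env.sh: point Spark at the distro's Hadoop client jars.
    export HADOOP_HOME=/opt/hadoop   # assumed install location
    # "hadoop classpath" prints the distro's full client classpath.
    export SPARK_DIST_CLASSPATH=$("${HADOOP_HOME}/bin/hadoop" classpath)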


