[jira] [Commented] (SPARK-6511) Publish hadoop provided build with instructions for different distros

2015-06-11 Thread Rick Moritz (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582150#comment-14582150 ]

Rick Moritz commented on SPARK-6511:


Thanks, that was timely (especially since this issue does not show up nearly as 
high in the search results as the eventual commit does).
A note: my HDP hadoop-2.6.0 does not support the {{--config}} option, so care 
has to be taken when hard-wiring such a feature into the build script.
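
One way a build script could stay safe is to probe for the option first. A 
minimal sketch, assuming the wrapper prints its usage text when run with no 
arguments and that HADOOP_CONF_DIR points at the desired config directory:

{code}
# Sketch: only pass --config if this hadoop wrapper advertises it.
if hadoop 2>&1 | grep -q -- "--config"; then
  SPARK_DIST_CLASSPATH=$(hadoop --config "$HADOOP_CONF_DIR" classpath)
else
  SPARK_DIST_CLASSPATH=$(hadoop classpath)
fi
export SPARK_DIST_CLASSPATH
{code}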

 Publish hadoop provided build with instructions for different distros
 ----------------------------------------------------------------------

 Key: SPARK-6511
 URL: https://issues.apache.org/jira/browse/SPARK-6511
 Project: Spark
  Issue Type: Improvement
  Components: Build
Reporter: Patrick Wendell
Assignee: Patrick Wendell
 Fix For: 1.4.0


 Currently we publish a series of binaries with different Hadoop client jars.
 This mostly works, but some users have reported compatibility issues with
 different distributions.
 One improvement moving forward might be to publish a binary build that simply
 asks you to set HADOOP_HOME to pick up the Hadoop client location. That way
 it would work across multiple distributions, even if they have subtle
 incompatibilities with upstream Hadoop.
 I think a first step would be to produce such a build for the community and
 see how well it works. One potential issue is that our fancy excludes and
 dependency re-writing won't work with the simpler "append Hadoop's classpath
 to Spark" approach. Also, how we deal with the Hive dependency is unclear,
 i.e., should we continue to bundle Spark's Hive (which has some fixes for
 dependency conflicts), or do we allow linking against vanilla Hive at runtime?






[jira] [Commented] (SPARK-6511) Publish hadoop provided build with instructions for different distros

2015-06-11 Thread Patrick Wendell (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582158#comment-14582158 ]

Patrick Wendell commented on SPARK-6511:


Thanks [~RPCMoritz] - the {{--config}} option was suggested in code review by 
[~vanzin]. Is that specific to CDH? If so, we should document that.




[jira] [Commented] (SPARK-6511) Publish hadoop provided build with instructions for different distros

2015-06-11 Thread Marcelo Vanzin (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582219#comment-14582219 ]

Marcelo Vanzin commented on SPARK-6511:
---

To be fair I did not suggest that. :-) And I assume you're talking about the 
--config option to the hadoop command, which is the only one I see in the PR. 
This is from branch-2.6 of the Apache Hadoop repo:

{code}
hadoop-common-project/hadoop-common/src/main/bin/hadoop:  echo "Usage: hadoop [--config confdir] COMMAND"
{code}




[jira] [Commented] (SPARK-6511) Publish hadoop provided build with instructions for different distros

2015-06-11 Thread Marcelo Vanzin (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582224#comment-14582224 ]

Marcelo Vanzin commented on SPARK-6511:
---

bq. To be fair I did not suggest that.

Turns out I did (in this bug, not in the PR), d'oh. But anyway, that option is 
part of the standard Hadoop code.




[jira] [Commented] (SPARK-6511) Publish hadoop provided build with instructions for different distros

2015-06-11 Thread Marcelo Vanzin (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582227#comment-14582227 ]

Marcelo Vanzin commented on SPARK-6511:
---

Sorry for all the noise; I just noticed what's wrong. Instead of:

{code}
export SPARK_DIST_CLASSPATH=$(hadoop classpath --config /path/to/configs) 
{code}

It should be:

{code}
export SPARK_DIST_CLASSPATH=$(hadoop --config /path/to/configs classpath)
{code}

The hadoop script only recognizes {{--config}} when it appears before the 
subcommand, so the argument order matters. Should have caught that during 
review. :-/




[jira] [Commented] (SPARK-6511) Publish hadoop provided build with instructions for different distros

2015-06-11 Thread Apache Spark (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582364#comment-14582364 ]

Apache Spark commented on SPARK-6511:
-

User 'vanzin' has created a pull request for this issue:
https://github.com/apache/spark/pull/6766




[jira] [Commented] (SPARK-6511) Publish hadoop provided build with instructions for different distros

2015-06-09 Thread Apache Spark (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14579457#comment-14579457 ]

Apache Spark commented on SPARK-6511:
-

User 'pwendell' has created a pull request for this issue:
https://github.com/apache/spark/pull/6729




[jira] [Commented] (SPARK-6511) Publish hadoop provided build with instructions for different distros

2015-04-13 Thread Patrick Wendell (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493183#comment-14493183 ]

Patrick Wendell commented on SPARK-6511:


Just as an example, I tried to wire Spark to work with stock Hadoop 2.6. Here 
is how I got it running after doing a hadoop-provided build. This is pretty 
clunky, so I wonder if we should just support setting HADOOP_HOME or something 
similar, and automatically find and add the jar files present within that 
folder (a sketch of that idea follows the example).

{code}
export SPARK_DIST_CLASSPATH=$(find /tmp/hadoop-2.6.0/ -name "*.jar" | tr "\n" ":")
./bin/spark-shell
{code}
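
A minimal sketch of that HADOOP_HOME idea (nothing here is part of an actual 
patch; the directory layout is assumed):

{code}
# Sketch: derive the classpath from HADOOP_HOME instead of a hard-coded path.
export HADOOP_HOME=/tmp/hadoop-2.6.0
export SPARK_DIST_CLASSPATH=$(find "$HADOOP_HOME" -name "*.jar" | tr "\n" ":")
{code}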

[~vanzin] for your CDH packages, what do you end up setting 
SPARK_DIST_CLASSPATH to?




[jira] [Commented] (SPARK-6511) Publish hadoop provided build with instructions for different distros

2015-04-13 Thread Sean Owen (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493200#comment-14493200 ]

Sean Owen commented on SPARK-6511:
--

Yeah, that might be the fastest way to find all the jars at once; they occur in 
various places in the raw Hadoop distro, so that one-liner is really not too 
bad. I don't know that it's a great idea to then also start modifying the 
classpath based on HADOOP_HOME, as this might not be what the end user wants, 
or it might interfere with an explicitly configured classpath.

In something like CDH the jars are all laid out in one directory per component, 
so they are easier to find, but that isn't much different. I don't see that the 
distro sets SPARK_DIST_CLASSPATH, but it sets things like SPARK_LIBRARY_PATH in 
spark-env.sh to ${SPARK_HOME}/lib. I actually don't see where the Hadoop deps 
come in, but it will be something similar. The effect is about the same: all of 
the Hadoop client and YARN jars get added to the classpath too.




[jira] [Commented] (SPARK-6511) Publish hadoop provided build with instructions for different distros

2015-04-13 Thread Patrick Wendell (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493274#comment-14493274 ]

Patrick Wendell commented on SPARK-6511:


Can we just run {{HADOOP_HOME/bin/hadoop classpath}} and capture the result? 
I'm wondering if there is a standard interface here that we can expect most 
Hadoop distributions to have.
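
A minimal sketch of that capture, assuming HADOOP_HOME is set and the launcher 
behaves like stock Hadoop:

{code}
# Sketch: let the distro's own launcher report its classpath.
export SPARK_DIST_CLASSPATH=$("$HADOOP_HOME/bin/hadoop" classpath)
{code}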




[jira] [Commented] (SPARK-6511) Publish hadoop provided build with instructions for different distros

2015-04-13 Thread Marcelo Vanzin (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493283#comment-14493283 ]

Marcelo Vanzin commented on SPARK-6511:
---

I think {{hadoop classpath}} would be safer w.r.t. compatibility, if you don't 
mind the extra overhead (it launches a JVM). One thing to remember in that case 
is to use the {{--config}} parameter to point to the actual config directory 
being used.
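
For example (the config path here is only illustrative):

{code}
# Sketch: point the classpath lookup at the config directory actually in use.
export SPARK_DIST_CLASSPATH=$(hadoop --config /etc/hadoop/conf classpath)
{code}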




[jira] [Commented] (SPARK-6511) Publish hadoop provided build with instructions for different distros

2015-04-13 Thread Marcelo Vanzin (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493217#comment-14493217 ]

Marcelo Vanzin commented on SPARK-6511:
---

We add a bunch of things to that variable, but the main thing is this:

{code}
  $HADOOP_HOME/client/*
{code}

I'm not sure whether other distributions place libraries in that location, or 
if that's a CDH-only thing.
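
In spark-env.sh terms, that amounts to something like the following sketch 
(CDH-style layout assumed; other distros may place the jars elsewhere):

{code}
# Sketch: CDH collects the client jars in one directory, so a single
# wildcard entry covers them all.
export SPARK_DIST_CLASSPATH="$HADOOP_HOME/client/*"
{code}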




[jira] [Commented] (SPARK-6511) Publish hadoop provided build with instructions for different distros

2015-04-13 Thread Kannan Rajah (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493585#comment-14493585 ]

Kannan Rajah commented on SPARK-6511:
-

As requested by Patrick, here is an example of what we use in spark-env.sh for 
the MapR distribution:

{code}
MAPR_HADOOP_CLASSPATH=`hadoop classpath`
MAPR_SPARK_CLASSPATH=$MAPR_HADOOP_CLASSPATH:$MAPR_HADOOP_HBASE_VERSION

MAPR_HADOOP_JNI_PATH=`hadoop jnipath`

export SPARK_LIBRARY_PATH=$MAPR_HADOOP_JNI_PATH

SPARK_SUBMIT_CLASSPATH=$SPARK_SUBMIT_CLASSPATH:$MAPR_SPARK_CLASSPATH
SPARK_SUBMIT_LIBRARY_PATH=$SPARK_SUBMIT_LIBRARY_PATH:$MAPR_HADOOP_JNI_PATH

export SPARK_SUBMIT_CLASSPATH
export SPARK_SUBMIT_LIBRARY_PATH
{code}





[jira] [Commented] (SPARK-6511) Publish hadoop provided build with instructions for different distros

2015-04-13 Thread Kannan Rajah (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493592#comment-14493592 ]

Kannan Rajah commented on SPARK-6511:
-

[~pwendell] Just wanted to let you know that we also have a way to add hive and 
hbase jars to the classpath. This is useful when a setup has multiple versions 
of hive and hbase installed, but a given Spark version only works with a 
specific version of each. We have some utility scripts that generate the right 
classpath entries based on a supported version of hive and hbase. If you think 
this would be useful in the Apache distribution, I can create a JIRA and share 
the code. At a high level, there are 3 files (a sketch of the lookup follows 
below):

- compatibility.version: File that holds the supported versions for each 
ecosystem component, e.g.:
hive_versions=0.13,0.12
hbase_versions=0.98

- compatible_version.sh: Returns the compatible version for a component by 
looking up the compatibility.version file. The first listed version that is 
available on the node is used.

- generate_classpath.sh: Uses the above 2 files to generate the classpath. This 
script is used in spark-env.sh to generate the classpath based on hive and 
hbase.
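
A rough sketch of what the version lookup could look like (the file layout and 
install paths here are assumptions, not the actual MapR code):

{code}
#!/bin/sh
# compatible_version.sh <component> -- sketch only.
# Reads component_versions=v1,v2,... from compatibility.version and prints
# the first listed version that is installed on this node.
component=$1
versions=$(grep "^${component}_versions=" compatibility.version | cut -d= -f2)
for v in $(echo "$versions" | tr ',' ' '); do
  # Assumed install layout: /opt/mapr/<component>/<component>-<version>
  if [ -d "/opt/mapr/$component/$component-$v" ]; then
    echo "$v"
    exit 0
  fi
done
exit 1
{code}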
