Please correct me if I’m wrong, but my impression from the Maven repositories
was that the separate packages exist simply to stay in sync with the various
versions of Hadoop. Looking at the latest documentation
(https://spark.apache.org/docs/latest/building-with-maven.html), multiple
Hadoop versions are called out.
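
For example, when building from source, the Hadoop version is picked with a
Maven profile plus the hadoop.version property. A rough sketch based on that
page (the exact profile names and versions are listed there):

    # Build Spark against Hadoop 2.4.x (sketch; see the building-with-maven
    # page above for the full list of Hadoop profiles and versions)
    mvn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package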

As for potential differences in Spark, this is more about ensuring that the
jars and library dependencies for the correct version of Hadoop are included,
so that Spark can connect to Hadoop properly, rather than about any differences
in Spark itself. Another good reference on this topic is the call-out of Hadoop
versions on GitHub: https://github.com/apache/spark
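
And if you grab one of the pre-built binary packages instead, you can usually
tell which Hadoop client libraries it ships by looking at the assembly jar. A
rough sketch, assuming the standard layout of the pre-built tarballs:

    # From the root of an unpacked pre-built Spark distribution; the assembly
    # jar name typically encodes the Hadoop version it was built against,
    # e.g. spark-assembly-1.1.0-hadoop2.4.0.jar
    ls lib/spark-assembly-*.jar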

HTH!


On September 11, 2014 at 18:39:10, Haopu Wang (hw...@qilinsoft.com) wrote:

Denny, thanks for the response.

 

I raised the question because in Spark 1.0.2 I saw one binary package for
Hadoop 2, but in Spark 1.1.0 there are separate packages for Hadoop 2.3 and 2.4.

That implies some difference in Spark depending on the Hadoop version.

 

From: Denny Lee [mailto:denny.g....@gmail.com]
Sent: Friday, September 12, 2014 9:35 AM
To: user@spark.apache.org; Haopu Wang; d...@spark.apache.org; Patrick Wendell
Subject: RE: Announcing Spark 1.1.0!

 

I’m not sure if I’m completely answering your question here, but I’m currently
working (on OS X) with Hadoop 2.5, and I’ve used the Spark 1.1 package built
for Hadoop 2.4 without any issues.
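
If it helps, this is roughly how that kind of setup is wired together. A
minimal sketch, assuming a pre-built Spark 1.1 tarball and that the Hadoop
configuration lives under /etc/hadoop/conf (that path is just an assumption;
adjust it for your install):

    # Point Spark at the existing Hadoop configuration so it picks up the
    # cluster's HDFS/YARN settings (path below is an assumption)
    export HADOOP_CONF_DIR=/etc/hadoop/conf
    # Then launch against the cluster, e.g.
    ./bin/spark-shell --master yarn-client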

 

 

On September 11, 2014 at 18:11:46, Haopu Wang (hw...@qilinsoft.com) wrote:

I see the binary packages include Hadoop 1, 2.3, and 2.4.
Does Spark 1.1.0 support Hadoop 2.5.0, released at the address below?

http://hadoop.apache.org/releases.html#11+August%2C+2014%3A+Release+2.5.0+available

-----Original Message-----
From: Patrick Wendell [mailto:pwend...@gmail.com]
Sent: Friday, September 12, 2014 8:13 AM
To: d...@spark.apache.org; user@spark.apache.org
Subject: Announcing Spark 1.1.0!

I am happy to announce the availability of Spark 1.1.0! Spark 1.1.0 is
the second release on the API-compatible 1.X line. It is Spark's
largest release ever, with contributions from 171 developers!

This release brings operational and performance improvements in Spark
core including a new implementation of the Spark shuffle designed for
very large scale workloads. Spark 1.1 adds significant extensions to
the newest Spark modules, MLlib and Spark SQL. Spark SQL introduces a
JDBC server, byte code generation for fast expression evaluation, a
public types API, JSON support, and other features and optimizations.
MLlib introduces a new statistics library along with several new
algorithms and optimizations. Spark 1.1 also builds out Spark's Python
support and adds new components to the Spark Streaming module.

Visit the release notes [1] to read about the new features, or
download [2] the release today.

[1] http://spark.eu.apache.org/releases/spark-release-1-1-0.html
[2] http://spark.eu.apache.org/downloads.html

NOTE: SOME ASF DOWNLOAD MIRRORS WILL NOT CONTAIN THE RELEASE FOR SEVERAL HOURS.

Please e-mail me directly about any typos in the release notes or name listing.

Thanks, and congratulations!
- Patrick

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
