Hi guys,
I ran into the same exception (while trying the same example), and after
overriding the hadoop-client artifact in my pom.xml, I got another error
(below).
System config:
Ubuntu 12.04
IntelliJ IDEA 13
Scala 2.10.3
Maven:

<dependency>
  <groupId>org.apache.spark</groupId>
"Found class org.apache.hadoop.mapreduce.TaskAttemptContext, but
interface was expected" is the classic error meaning you compiled
against Hadoop 1 but are running against Hadoop 2: TaskAttemptContext
was a concrete class in Hadoop 1.x and became an interface in Hadoop
2.x, so code compiled against one is binary-incompatible with the other.
I think you need to override the hadoop-client artifact that Spark
depends on to be a Hadoop 2.x version.
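One way to check which hadoop-client version Maven actually resolves is
the dependency tree:

  mvn dependency:tree -Dincludes=org.apache.hadoop:hadoop-client

If that shows a 1.x version, declaring a 2.x hadoop-client directly in
your pom.xml takes precedence over the transitive one, since Maven
prefers the nearest declaration.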
On Tue,
Hi
Set up project under Eclipse using Maven:
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.0.0</version>
</dependency>
Simple example fails:
def main(args: Array[String]): Unit = {
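For reference, a minimal self-contained example of this kind (the app
name, master, and input path are illustrative, not the exact code) is:

import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]): Unit = {
    // Run locally so no cluster is needed
    val conf = new SparkConf().setAppName("SimpleApp").setMaster("local[2]")
    val sc = new SparkContext(conf)
    // textFile/count go through Hadoop's input-format classes,
    // which is where the class-vs-interface mismatch shows up
    val lines = sc.textFile("README.md")
    println("Lines: " + lines.count())
    sc.stop()
  }
}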
Wow! What a quick reply!
adding

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.4.0</version>
</dependency>

solved the problem.
But now I get
14/06/03 19:52:50 ERROR Shell: Failed to locate the winutils binary in
the hadoop binary path
I'd try the internet / SO first -- these are actually generic
Hadoop-related issues. Here I think you don't have HADOOP_HOME or
similar set.
http://stackoverflow.com/questions/19620642/failed-to-locate-the-winutils-binary-in-the-hadoop-binary-path
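If setting HADOOP_HOME isn't convenient, a commonly suggested workaround
is to point Hadoop at the winutils directory from code, before the
SparkContext is created (the path below is illustrative and must be a
directory containing bin\winutils.exe):

// Assumed install location; adjust to wherever winutils.exe lives
System.setProperty("hadoop.home.dir", "C:\\hadoop")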
On Tue, Jun 3, 2014 at 5:54 PM, toivoa
Yeah unfortunately Hadoop 2 requires these binaries on Windows. Hadoop 1 runs
just fine without them.
Matei
On Jun 3, 2014, at 10:33 AM, Sean Owen so...@cloudera.com wrote:
I'd try the internet / SO first -- these are actually generic
Hadoop-related issues. Here I think you don't have HADOOP_HOME or
similar set.