I modified my pom.xml according to the Spark pom.xml. It is working right now.
Hadoop2 classes are no longer packaged into my jar. Thanks.
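For anyone hitting the same issue, a minimal sketch of the kind of pom.xml change that resolves it, assuming spark-core and hadoop-client are the artifacts involved (artifact ids and versions here are illustrative, not copied from the actual pom):

```xml
<!-- Sketch only: mark Spark and Hadoop as provided so they are not
     bundled into the application jar; the cluster supplies them at runtime. -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.1.0</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.0.4</version>
  <scope>provided</scope>
</dependency>
```

With provided scope, the jar is then run with spark-submit against the cluster's own Spark and Hadoop libraries.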
From: eyc...@hotmail.com
To: so...@cloudera.com
CC: user@spark.apache.org
Subject: RE: spark 1.1.0 save data to hdfs failed
Date: Sat, 24 Jan 2015 07:30
[INFO]\- (junit:junit:jar:4.11:provided - omitted for duplicate)
From: so...@cloudera.com
Date: Sat, 24 Jan 2015 09:46:02 +
Subject: Re: spark 1.1.0 save data to hdfs failed
To: eyc...@hotmail.com
CC: user@spark.apache.org
Hadoop 2's artifact is hadoop-common
CC: user@spark.apache.org
Subject: RE: spark 1.1.0 save data to hdfs failed
Date: Fri, 23 Jan 2015 11:17:36 -0800
Thanks. I looked at the dependency tree. I did not see any dependent jar
of hadoop-core from hadoop2. However the jar built from maven has the
class:
org/apache/hadoop/mapreduce/task; it is related to hadoop2, hadoop2-yarn, and hadoop1. Any
suggestion how to resolve it?
Thanks.
From: so...@cloudera.com
Date: Fri, 23 Jan 2015 14:01:45 +
Subject: Re: spark 1.1.0 save data to hdfs failed
To: eyc...@hotmail.com
CC: user@spark.apache.org
These are all definitely symptoms of mixing incompatible versions of
libraries.
From: so...@cloudera.com
Subject: Re: spark 1.1.0 save data to hdfs failed
To: eyc...@hotmail.com
CC: user@spark.apache.org
I'm not suggesting you haven't excluded Spark / Hadoop, but, this is
not the only way Hadoop deps get into your app. See my suggestion
about investigating the dependency tree.
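Concretely, the check being suggested can be run from the project root like this (a sketch; the -Dincludes filter just narrows the output to Hadoop artifacts):

```
# Print the resolved dependency tree, filtered to org.apache.hadoop,
# to see which transitive dependency is pulling in Hadoop 2 classes.
mvn dependency:tree -Dincludes=org.apache.hadoop
```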
On Fri, Jan 23, 2015 at 1:53 PM, ey-chih chow eyc...@hotmail.com wrote:
Date: Fri, 23 Jan 2015 17:01:48 +
Subject: RE: spark 1.1.0 save data to hdfs failed
From: so...@cloudera.com
To: eyc...@hotmail.com
Are you receiving my replies? I have suggested a resolution. Look at the
dependency tree next.
On Jan 23, 2015 2:43 PM, ey-chih chow eyc...@hotmail.com wrote:
... 6 more
From: eyc...@hotmail.com
To: so...@cloudera.com
CC: yuzhih...@gmail.com; user@spark.apache.org
Subject: RE: spark 1.1.0 save data to hdfs failed
Date: Thu, 22 Jan 2015 17:05:26 -0800
Thanks. But after I replaced the Maven dependency from
dependency
So, you should not depend on Hadoop artifacts unless you use them
directly. You should mark Hadoop and Spark deps as provided. Then the
cluster's version is used at runtime with spark-submit. That's the
usual way to do it, which works.
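For illustration, the usual launch then looks something like this (class name, master URL, and jar name are made up; the point is that the cluster's own Spark and Hadoop jars are on the classpath at runtime, so the app jar should not bundle them):

```
spark-submit --class com.example.EtlJob --master spark://master:7077 etl-job.jar
```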
If you need to embed Spark in your app and are running it
try { Class.forName(first) } catch { case e: ClassNotFoundException => Class.forName(second) }
From: eyc...@hotmail.com
To: so...@cloudera.com
CC: user@spark.apache.org
Subject: RE: spark 1.1.0 save data to hdfs failed
Date: Fri, 23 Jan 2015 06:43:00 -0800
I looked
From: eyc...@hotmail.com
To: yuzhih...@gmail.com
CC: user@spark.apache.org
Subject: RE: spark 1.1.0 save data to hdfs failed
Date: Wed, 21 Jan 2015 23:12:56 -0800
The hdfs release should be hadoop 1.0.4.
Ey-Chih Chow
Date: Wed, 21 Jan 2015 16:56:25 -0800
Subject: Re: spark 1.1.0 save data to hdfs failed
From: yuzhih...@gmail.com
To: eyc...@hotmail.com
CC: user@spark.apache.org
Incorrect header or version mismatch from 10.33.140.233:53776 got version 9 expected version 4
What should I do to fix this?
Thanks.
Ey-Chih
From: so...@cloudera.com
Subject: Re: spark 1.1.0 save data to hdfs failed
To: eyc...@hotmail.com
CC: yuzhih...@gmail.com; user@spark.apache.org
It means your client app is using Hadoop 2.x and your HDFS is Hadoop 1.x.
On Thu, Jan 22, 2015 at 10:32 PM, ey-chih chow eyc...@hotmail.com wrote:
I looked into the namenode log
Hi,
I used the following fragment of a scala program to save data to hdfs:
contextAwareEvents
  .map(e => (new AvroKey(e), null))
  .saveAsNewAPIHadoopFile("hdfs://" + masterHostname + ":9000/ETL/output/" + dateDir,
    classOf[AvroKey[GenericRecord]],
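For context, a sketch of what the completed call typically looks like with the Avro output format; AvroKeyOutputFormat and the NullWritable value are assumptions (the original message used null and is truncated here), and contextAwareEvents is taken to be an RDD[GenericRecord]:

```scala
import org.apache.avro.generic.GenericRecord
import org.apache.avro.mapred.AvroKey
import org.apache.avro.mapreduce.AvroKeyOutputFormat
import org.apache.hadoop.io.NullWritable

// Sketch only: writes (AvroKey[GenericRecord], NullWritable) pairs
// through the new Hadoop API, assuming avro-mapred is on the classpath.
contextAwareEvents
  .map(e => (new AvroKey(e), NullWritable.get()))
  .saveAsNewAPIHadoopFile(
    "hdfs://" + masterHostname + ":9000/ETL/output/" + dateDir,
    classOf[AvroKey[GenericRecord]],
    classOf[NullWritable],
    classOf[AvroKeyOutputFormat[GenericRecord]])
```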
What hdfs release are you using ?
Can you check namenode log around time of error below to see if there is
some clue ?
Cheers
On Wed, Jan 21, 2015 at 4:51 PM, ey-chih chow eyc...@hotmail.com wrote: