On 6/26/14, Sean Owen <so...@cloudera.com> wrote:
> Yes it does. The idea is to override the dependency if needed. I thought
> you mentioned that you had built for Hadoop 2.

I'm very confused :-(

I downloaded the Spark distro for Hadoop 2, and installed it on my
machine.  But the code doesn't reference that install path - it uses
sbt for dependencies.  As far as I can tell, using sbt, maven, or ivy
will always end up with a transitive dependency on Hadoop 1.
Shouldn't there be a Maven option to pull in Hadoop 2, just like there
is a separate package to download?

How does everyone solve this problem? (Surely I'm not the only one
using Hadoop 2 and sbt or maven or ivy!)
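
Is the "override the dependency" approach just something like the
following in build.sbt?  (A rough sketch on my part -- the Hadoop
version below is a guess; I'd substitute whatever the cluster
actually runs:)

    // Exclude the Hadoop 1 client that spark-core pulls in transitively,
    // and depend on a Hadoop 2 client explicitly.  "2.2.0" is a placeholder.
    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core" % "1.0.0"
        exclude("org.apache.hadoop", "hadoop-client"),
      "org.apache.hadoop" % "hadoop-client" % "2.2.0"
    )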

> On Jun 26, 2014 11:07 AM, "Robert James" <srobertja...@gmail.com> wrote:
>
>> Yes.  As far as I can tell, Spark seems to be including Hadoop 1 via
>> its transitive dependency:
>> http://mvnrepository.com/artifact/org.apache.spark/spark-core_2.10/1.0.0
>> - shows a dependency on Hadoop 1.0.4, which I'm perplexed by.
>>
>> On 6/26/14, Sean Owen <so...@cloudera.com> wrote:
>> > You seem to have the binary for Hadoop 2, since it was compiled
>> > expecting TaskAttemptContext to be an interface. So the error
>> > indicates that Spark is also seeing Hadoop 1 classes somewhere.
>> >
>> > On Wed, Jun 25, 2014 at 4:41 PM, Robert James <srobertja...@gmail.com>
>> > wrote:
>> >> After upgrading to Spark 1.0.0, I get this error:
>> >>
>> >>  ERROR org.apache.spark.executor.ExecutorUncaughtExceptionHandler -
>> >> Uncaught exception in thread Thread[Executor task launch
>> >> worker-2,5,main]
>> >> java.lang.IncompatibleClassChangeError: Found interface
>> >> org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
>> >>
>> >> I thought this was caused by a dependency on Hadoop 1.0.4 (even though
>> >> I downloaded the Spark 1.0.0 for Hadoop 2), but I can't seem to fix
>> >> it.  Any advice?
>> >
>>
>
