Thanks, dude... I think I will spin up a Docker container for the integration
tests.
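
For reference, the rough shape of the test trait I have in mind, purely as an
illustration (it just runs against a local master, so no Hadoop install or
winutils is needed; all names here are made up):

import org.apache.spark.{SparkConf, SparkContext}
import org.scalatest.{BeforeAndAfterAll, Suite}

// Mix this into ScalaTest suites: it creates a local SparkContext before the
// tests run and stops it afterwards.
trait SparkTestContext extends BeforeAndAfterAll { self: Suite =>
  @transient var sc: SparkContext = _

  override def beforeAll(): Unit = {
    super.beforeAll()
    val conf = new SparkConf().setMaster("local[2]").setAppName("integration-test")
    sc = new SparkContext(conf)
  }

  override def afterAll(): Unit = {
    if (sc != null) sc.stop()
    super.afterAll()
  }
}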

----------------------------------------------------------------------------------------------------------------------------------
Skype: boci13, Hangout: boci.b...@gmail.com

On Thu, Feb 26, 2015 at 12:22 AM, Sean Owen <so...@cloudera.com> wrote:

> Yes, been on the books for a while ...
> https://issues.apache.org/jira/browse/SPARK-2356
> That one may just always be a known 'gotcha' on Windows; it's kind of
> a Hadoop gotcha. I don't know that Spark works 100% on Windows, and it
> isn't tested on Windows.
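
For anyone who hits the winutils error quoted below: the workaround people
usually use is to put winutils.exe under some local directory and point
hadoop.home.dir at that directory before the SparkContext is created. A rough
sketch, with a purely illustrative path:

// The directory must contain bin\winutils.exe; the path is just an example.
System.setProperty("hadoop.home.dir", "C:\\hadoop")

That is a Hadoop-side workaround rather than anything Spark itself fixes.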
>
> On Wed, Feb 25, 2015 at 11:05 PM, boci <boci.b...@gmail.com> wrote:
> > Thanks for your fast answer...
> > On Windows it's not working, because Hadoop (surprise, surprise) needs
> > winutils.exe. Without it, it doesn't work, and if you don't set the Hadoop
> > directory you simply get
> >
> > 15/02/26 00:03:16 ERROR Shell: Failed to locate the winutils binary in the
> > hadoop binary path
> > java.io.IOException: Could not locate executable null\bin\winutils.exe in
> > the Hadoop binaries.
> >
> > b0c1
> >
> >
> >
> ----------------------------------------------------------------------------------------------------------------------------------
> > Skype: boci13, Hangout: boci.b...@gmail.com
> >
> > On Wed, Feb 25, 2015 at 11:50 PM, Sean Owen <so...@cloudera.com> wrote:
> >>
> >> Spark and Hadoop should be listed as 'provided' dependencies in your
> >> Maven or SBT build; that still makes them available at compile time.
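
A minimal sketch of what that looks like in an sbt build, with purely
illustrative version numbers:

// Compile against Spark and Hadoop, but do not bundle them into the
// application jar; the runtime environment provides them.
libraryDependencies ++= Seq(
  "org.apache.spark"  %% "spark-core"    % "1.2.1" % "provided",
  "org.apache.hadoop" %  "hadoop-client" % "2.4.0" % "provided"
)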
> >>
> >> On Wed, Feb 25, 2015 at 10:42 PM, boci <boci.b...@gmail.com> wrote:
> >> > Hi,
> >> >
> >> > I have a little question. I want to develop a Spark-based application,
> >> > but Spark depends on the hadoop-client library. I think it's not
> >> > necessary (Spark standalone), so I excluded it from the sbt file... the
> >> > result is interesting: my trait where I create the SparkContext does not
> >> > compile.
> >> >
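
Presumably the exclusion was something along these lines in the sbt file (this
is only a guess at the shape, not the actual build):

libraryDependencies += ("org.apache.spark" %% "spark-core" % "1.2.1")
  .exclude("org.apache.hadoop", "hadoop-client")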
> >> > The error:
> >> > ...
> >> > scala.reflect.internal.Types$TypeError: bad symbolic reference. A
> >> > signature in SparkContext.class refers to term mapred
> >> > [error] in package org.apache.hadoop which is not available.
> >> > [error] It may be completely missing from the current classpath, or the
> >> > version on the classpath might be incompatible with the version used
> >> > when compiling SparkContext.class.
> >> > ...
> >> >
> >> > I use this class for integration tests. I'm on Windows and I don't want
> >> > to use Hadoop for the integration tests. How can I solve this?
> >> >
> >> > Thanks
> >> > Janos
> >> >
> >
> >
>
