Spark and Hadoop should be listed as 'provided' dependencies in your
Maven or SBT build. 'provided' scope still makes them available at compile
time; they just aren't bundled into your packaged artifact.
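
In SBT that would look roughly like the line below (the Spark version is
only illustrative; use whichever release you build against):

  // "provided" keeps spark-core on the compile classpath but out of the packaged artifact
  libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.1" % "provided"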

On Wed, Feb 25, 2015 at 10:42 PM, boci <boci.b...@gmail.com> wrote:
> Hi,
>
> I have a little question. I want to develop a Spark-based application, but
> Spark depends on the hadoop-client library. I think it's not necessary (Spark
> standalone), so I excluded it from my sbt file. The result is interesting: the
> trait where I create the SparkContext no longer compiles.
>
> The error:
> ...
>  scala.reflect.internal.Types$TypeError: bad symbolic reference. A signature
> in SparkContext.class refers to term mapred
> [error] in package org.apache.hadoop which is not available.
> [error] It may be completely missing from the current classpath, or the
> version on
> [error] the classpath might be incompatible with the version used when
> compiling SparkContext.class.
> ...
>
> I use this class for integration tests. I'm on Windows and I don't want to
> use Hadoop for integration testing. How can I solve this?
>
> Thanks
> Janos
>
