Sorry Sean, you are absolutely right, it does support 2.11. All I meant is
that there is no release available as a standard download and that one has
to build it.  Thanks for the clarification.
-Todd

On Sunday, October 25, 2015, Sean Owen <so...@cloudera.com> wrote:

> Hm, why do you say it doesn't support 2.11? It does.
>
> It is not even this difficult; you just need a source distribution,
> and then run "./dev/change-scala-version.sh 2.11" as you say. Then
> build as normal.
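>
> For example, a rough sketch of the full sequence (assuming a Spark 1.5.1
> source checkout and the bundled Maven wrapper):
>
>     # switch the build to Scala 2.11, then build with the 2.11 profile
>     ./dev/change-scala-version.sh 2.11
>     ./build/mvn -Dscala-2.11 -DskipTests clean package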
>
> On Sun, Oct 25, 2015 at 4:00 PM, Todd Nist <tsind...@gmail.com> wrote:
> > Hi Bilnmek,
> >
> > Spark 1.5.x does not support Scala 2.11.7, so the easiest thing to do is
> > build it like you're trying.  Here are the steps I followed to build it
> > on a Mac OS X 10.10.5 environment; it should be very similar on Ubuntu.
> >
> > 1.  Set the JAVA_HOME environment variable in my bash session via export
> > JAVA_HOME=$(/usr/libexec/java_home).
> > 2.  Spark is easiest to build with Maven, so ensure Maven is installed; I
> > installed 3.3.x.
> > 3.  Download the source from Spark's site and extract it.
> > 4.  Change into the spark-1.5.1 folder and run (see the quick check after
> > this list):
> >        ./dev/change-scala-version.sh 2.11
> > 5.  Issue the following command to build and create a distribution:
> >
> > ./make-distribution.sh --name hadoop-2.6_scala-2.11 --tgz -Pyarn
> > -Phadoop-2.6 -Dhadoop.version=2.6.0 -Dscala-2.11 -DskipTests
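> >
> > As a purely optional sanity check that the version switch in step 4 took
> > effect (the property name below is my assumption about Spark's root
> > pom.xml):
> >
> >     # change-scala-version.sh rewrites the POMs in place
> >     grep -m1 '<scala.binary.version>' pom.xml
> >     # expected: <scala.binary.version>2.11</scala.binary.version>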
> >
> > This will provide you with a fully self-contained installation of Spark
> > for Scala 2.11, including scripts and the like.  There are some
> > limitations; see
> > http://spark.apache.org/docs/latest/building-spark.html#building-for-scala-211
> > for what is not supported.
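> >
> > Once the build completes, a rough way to verify it (the exact tgz name
> > here is my guess based on the --name value above):
> >
> >     tar -xzf spark-1.5.1-bin-hadoop-2.6_scala-2.11.tgz
> >     cd spark-1.5.1-bin-hadoop-2.6_scala-2.11
> >     ./bin/spark-shell   # banner should say "Using Scala version 2.11.x"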
> >
> > HTH,
> >
> > -Todd
> >
> >
> > On Sun, Oct 25, 2015 at 10:56 AM, Bilinmek Istemiyor <benibi...@gmail.com>
> > wrote:
> >>
> >>
> >> I am just starting out with Apache Spark. I have zero knowledge about
> >> the Spark environment, Scala, and sbt. I have build problems which I
> >> could not solve. Any help much appreciated.
> >>
> >> I am using Kubuntu 14.04, Java 1.7.0_80, Scala 2.11.7, and Spark 1.5.1.
> >>
> >> I tried to compile Spark from source and received the following errors:
> >>
> >> [error] impossible to get artifacts when data has not been loaded.
> >>         IvyNode = org.scala-lang#scala-library;2.10.3
> >> [error] (hive/*:update) java.lang.IllegalStateException: impossible to
> >>         get artifacts when data has not been loaded.
> >>         IvyNode = org.scala-lang#scala-library;2.10.3
> >> [error] (streaming-flume-sink/avro:generate)
> >>         org.apache.avro.SchemaParseException: Undefined name: "strıng"
> >> [error] (streaming-kafka-assembly/*:assembly)
> >>         java.util.zip.ZipException: duplicate entry: META-INF/MANIFEST.MF
> >> [error] (streaming-mqtt/test:assembly) java.util.zip.ZipException:
> >>         duplicate entry: META-INF/MANIFEST.MF
> >> [error] (assembly/*:assembly) java.util.zip.ZipException:
> >>         duplicate entry: META-INF/MANIFEST.MF
> >> [error] (streaming-mqtt-assembly/*:assembly) java.util.zip.ZipException:
> >>         duplicate entry: META-INF/MANIFEST.MF
> >> [error] Total time: 1128 s, completed 25.Eki.2015 11:00:52
> >>
> >> Sorry about the strange characters. I tried to capture the output with
> >>
> >>     sbt clean assembly 2>&1 | tee compile.txt
> >>
> >> and compile.txt was full of these characters.  I have attached the
> >> output of the full compile process, "compile.txt".
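> >>
> >> In hindsight, I probably should have disabled sbt's colored output when
> >> capturing the log, e.g. (assuming the standard sbt launcher):
> >>
> >>     sbt -Dsbt.log.noformat=true clean assembly 2>&1 | tee compile.txt
> >>
> >> since the strange characters appear to be ANSI color escape codes.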
> >>
> >>
> >
> >
>
