Nicholas,

Yes, I saw them, but they refer to Maven, and I was under the impression that
sbt is the preferred way of building Spark. Is Maven indeed the "right
way"? Anyway, as per your advice I Ctrl-D'ed my sbt shell and ran `mvn
-DskipTests clean package`, which completed successfully. So, indeed, in
trying to use sbt I was on a wild goose chase.

Here are a couple of glitches I'm seeing. First of all, there are many
warnings such as the following:

[WARNING] /home/alex/git/spark/streaming/src/test/scala/org/apache/spark/streaming/BasicOperationsSuite.scala:454: inferred existential type scala.collection.mutable.HashMap[org.apache.spark.streaming.Time,org.apache.spark.rdd.RDD[_$2]] forSome { type _$2 }, which cannot be expressed by wildcards, should be enabled by making the implicit value scala.language.existentials visible.
[WARNING]     assert(windowedStream2.generatedRDDs.contains(Time(10000)))
[WARNING]                            ^
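
If I read that one correctly, it's just scalac asking for an explicit opt-in
to existential types. This toy file of mine (the names are made up, not
Spark's) seems to reproduce the pattern, and the language import the message
mentions silences it:

import scala.collection.mutable.HashMap
import scala.language.existentials // the opt-in the warning asks for

class Stream[T] { val rdds = new HashMap[Int, List[T]] }

object ExistentialDemo {
  val s: Stream[_] = new Stream[String]
  // Without the language import above (and compiling with -feature), the
  // inferred type of `r` is an existential along the lines of
  // HashMap[Int, List[_$1]] forSome { type _$1 }, and scalac emits
  // exactly this kind of warning.
  val r = s.rdds
}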

[WARNING] /home/alex/git/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/parquet/FakeParquetSerDe.scala:34: @deprecated now takes two arguments; see the scaladoc.
[WARNING] @deprecated("No code should depend on FakeParquetHiveSerDe as it is only intended as a " +
[WARNING]  ^
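
This second one looks harmless: if I read the scaladoc right, the annotation
now wants both a message and the version the deprecation started in. A
made-up example:

object DeprecationDemo {
  // two arguments: a message and a "since" version
  @deprecated("use newApi instead", "2.0")
  def oldApi(): Int = 1

  def newApi(): Int = 2
}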

[WARNING] /home/alex/git/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala:435: trait Deserializer in package serde2 is deprecated: see corresponding Javadoc for more information.
[WARNING] Utils.getContextOrSparkClassLoader).asInstanceOf[Class[Deserializer]],
[WARNING]                                                        ^
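
This third one seems to be the same deprecation machinery seen from the use
site: merely naming a @deprecated type, e.g. in a cast, is enough to warn.
Again a sketch with invented names:

@deprecated("see the corresponding Javadoc", "0.13")
trait OldDeserializer

object CastDemo {
  // Naming the deprecated trait in the cast triggers the warning
  // (with -deprecation) at this line.
  def load(name: String): Class[OldDeserializer] =
    Class.forName(name).asInstanceOf[Class[OldDeserializer]]
}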

[WARNING] /home/alex/git/spark/examples/src/main/scala/org/apache/spark/examples/mllib/StreamingKMeans.scala:22: imported `StreamingKMeans' is permanently hidden by definition of object StreamingKMeans in package mllib
[WARNING] import org.apache.spark.mllib.clustering.StreamingKMeans
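
And unless I'm misreading the last one, it's plain name shadowing: the
example file defines an object with the same name as the one it imports, so
the import is dead. A toy version (my names, not Spark's):

package outer {
  object Thing
}

package example {
  // warning: the import is permanently hidden by the definition below
  import outer.Thing

  object Thing // shadows the imported Thing for the whole file
}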

Are all of these warnings expected?

Also, mvn complains about not having zinc. Is this a problem?

[WARNING] Zinc server is not available at port 3030 - reverting to normal incremental compile

Alex

On Tue, Nov 4, 2014 at 3:11 PM, Nicholas Chammas <nicholas.cham...@gmail.com> wrote:

> FWIW, the "official" build instructions are here:
> https://github.com/apache/spark#building-spark
>
> On Tue, Nov 4, 2014 at 5:11 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
>> I built based on this commit today and the build was successful.
>>
>> What command did you use?
>>
>> Cheers
>>
>> On Tue, Nov 4, 2014 at 2:08 PM, Alessandro Baretta <alexbare...@gmail.com> wrote:
>>
>> > Fellow Sparkers,
>> >
>> > I am new here and still trying to learn to crawl. Please bear with me.
>> >
>> > I just pulled f90ad5d from https://github.com/apache/spark.git and am
>> > running the compile command in the sbt shell. This is the error I'm
>> > seeing:
>> >
>> > [error] /home/alex/git/spark/mllib/src/main/scala/org/apache/spark/mllib/linalg/Vectors.scala:32: object sql is not a member of package org.apache.spark
>> > [error] import org.apache.spark.sql.catalyst.types._
>> > [error]                         ^
>> >
>> > Am I doing something obscenely stupid, or is the build genuinely broken?
>> >
>> > Alex
>> >
>>
>
>
