Hey All,

1. The original test infrastructure hosted by the AMPLab has been
fully restored and expanded with many more executor slots for tests.
Thanks to Matt Massie at the AMPLab for helping with this.

2. We now have a nightly build matrix across different Hadoop
versions. It appears that the Maven build is failing tests with some
of the newer Hadoop versions. If people from the community are
interested, patches that diagnose and fix these test failures would
be very welcome (they are all dependency related):

https://issues.apache.org/jira/browse/SPARK-2232
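
To reproduce one of these failures locally, you can point the Maven
build at a specific Hadoop version and run the tests. The profile and
version below are just an example; the building-with-Maven page in
the docs lists the supported combinations:

  mvn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package
  mvn -Phadoop-2.4 -Dhadoop.version=2.4.0 test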

3. Prashant Sharma has spent a lot of time making it possible for our
sbt build to read dependencies from Maven. This will save us a huge
amount of headache in keeping the two builds consistent. I just
wanted to give users a heads-up about this: we should retain
compatibility with features of the sbt build, but if you are e.g.
hooking into deep internals of our build it may affect you. I'm
hoping this can be updated and merged in the next week:

https://github.com/apache/spark/pull/77
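
For those curious, the change builds on the sbt-pom-reader plugin,
which lets sbt derive its module list and dependency graph from the
existing pom.xml files. A rough sketch of what that kind of
integration looks like (the plugin coordinates, version, and
overrides here are illustrative, not the exact contents of the PR):

  // project/plugins.sbt -- pull in the pom-reader plugin
  addSbtPlugin("com.typesafe.sbt" % "sbt-pom-reader" % "1.0.0")

  // project/SparkBuild.scala -- extending PomBuild makes sbt read
  // the project structure and dependencies from the Maven poms
  import com.typesafe.sbt.pom.PomBuild

  object SparkBuild extends PomBuild {
    // sbt-specific settings (test options, assembly tweaks, etc.)
    // can still be layered on top of what the poms declare
  }

The upshot is that pom.xml becomes the single source of truth for
dependencies, so the two builds can no longer drift apart.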

4. We've moved most of the documentation over to recommending that
users build with Maven when creating official packages. This is just
to provide a single "reference build" of Spark: it's the one we test
and package for releases, the one where we make sure all transitive
dependencies are correct, and so on. I'd recommend that all
downstream packagers use this build.
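
As a concrete example, a release-style package build looks roughly
like this (again, profile names and versions are illustrative; the
docs have the authoritative list):

  mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package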

For day-to-day development I imagine sbt will remain more popular
(REPL, incremental builds, etc.). Prashant's work lets us get the
"best of both worlds," which is great.

- Patrick
