On 29 Oct 2013, at 02:47, Matei Zaharia wrote:
> Yes, we still write out data after these tasks in Spark 0.8, and it needs to
> be written out before any stage that reads it can start. The main reason is
> simplicity when there are faults, as well as more flexible scheduling (you
> don't have t
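The behavior described above (map tasks writing out their shuffle output, and downstream stages only starting once it is available) can be sketched in plain Python. This is an illustrative simulation, not Spark's actual shuffle code: the function names and the in-memory "buckets" standing in for on-disk shuffle files are my own inventions.

```python
# Simplified sketch of a shuffle boundary (NOT Spark's implementation):
# each map task hash-partitions its output into per-reducer buckets, and
# all map output must exist before any reduce task reads its partition.

def map_side_shuffle(records, num_reducers):
    """A map task bucketizes its output by reducer partition."""
    buckets = [[] for _ in range(num_reducers)]
    for key, value in records:
        buckets[hash(key) % num_reducers].append((key, value))
    # In Spark these buckets would be persisted (RAM/disk) at this point.
    return buckets

def reduce_side_read(all_map_outputs, partition):
    """A reduce task fetches its partition from every map task's output."""
    merged = {}
    for buckets in all_map_outputs:  # one entry per completed map task
        for key, value in buckets[partition]:
            merged.setdefault(key, []).append(value)
    return merged

# Both map tasks must finish (their output "written out") before any
# reducer starts; a map failure only requires re-running that one task.
map_outputs = [
    map_side_shuffle([("a", 1), ("b", 2)], num_reducers=2),
    map_side_shuffle([("a", 3), ("c", 4)], num_reducers=2),
]
partition0 = reduce_side_read(map_outputs, partition=0)
partition1 = reduce_side_read(map_outputs, partition=1)
```

Because the map output is materialized, a lost reduce task can simply re-fetch it, which is the fault-tolerance simplicity mentioned in the reply.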
Hey everybody,
I just watched the Spark Internals presentation [1] from the December 2012 dev
meetup and have a couple of questions regarding the output of tasks before a
shuffle.
1. Can anybody confirm that the default is still to persist stage output to
RAM/disk and then have the following t
Are you running the latest Scala IDE 3.0.1 [1] with Eclipse 4.3 (Kepler)?
If you want to keep using Eclipse you will have to use version 3.0.0 of Scala
IDE (last version supporting Scala 2.9.x), which only works for the older
Eclipse 4.2 (Juno).
[1] http://scala-ide.org/download/current.html