You should probably be asking the opposite question: why do you think it *should* be applied immediately? Since the driver program hasn't requested any data back (distinct returns a new RDD; it doesn't return any data to the driver), there's no need to actually compute anything yet.

As the documentation describes, a call that returns an RDD is transforming the data: it just keeps track of the operation it will eventually need to perform. Only methods that return data back to the driver (actions) trigger any computation.
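The distinction can be sketched with a toy class (these names are hypothetical, not Spark's actual implementation): a "transformation" only composes a deferred function, and nothing runs until an "action" like collect is called.

```scala
// Hypothetical sketch of lazy transformations vs. eager actions.
var evaluations = 0 // counts how many times the mapped function actually runs

// A "transformation" just wraps a thunk; calling map performs no work.
final case class MiniRDD[T](compute: () => Seq[T]) {
  def map[U](f: T => U): MiniRDD[U] =
    MiniRDD(() => compute().map(f)) // lazy: only records the composition
  def collect(): Seq[T] = compute() // action: this is what triggers evaluation
}

val base = MiniRDD(() => Seq(1, 2, 3))
val mapped = base.map { x => evaluations += 1; x * 2 }
assert(evaluations == 0) // map was called, but nothing has been computed
val result = mapped.collect()
assert(evaluations == 3) // the work happened only at the action
```

Calling map any number of times just builds a longer chain of thunks; the counter stays at zero until collect forces the chain.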

(The one known exception is sortByKey, which really should be lazy, but apparently uses an RDD.count call in its implementation: https://spark-project.atlassian.net/browse/SPARK-1021).

March 11, 2014 at 9:49 PM
For example, is the distinct() transformation lazy?

When I look at the Spark source code, distinct applies a map -> reduceByKey -> map chain to the RDD elements. Why is this lazy? Won't those functions be applied immediately to the elements of the RDD when I call someRDD.distinct?

  /**
   * Return a new RDD containing the distinct elements in this RDD.
   */
  def distinct(numPartitions: Int): RDD[T] =
    map(x => (x, null)).reduceByKey((x, y) => x, numPartitions).map(_._1)

  /**
   * Return a new RDD containing the distinct elements in this RDD.
   */
  def distinct(): RDD[T] = distinct(partitions.size)
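The same map -> reduceByKey -> map composition can be mimicked with plain Scala collections (a hypothetical analogue for illustration; groupBy stands in for reduceByKey, which plain collections don't have). The key difference is that, unlike an RDD, a List evaluates each step eagerly:

```scala
// Plain-collections analogue of distinct's implementation (illustrative only).
val xs = List(1, 2, 2, 3, 3, 3)

val distinctXs = xs
  .map(x => (x, null))        // key each element, as distinct does
  .groupBy(_._1)              // stand-in for reduceByKey((x, y) => x)
  .map { case (k, _) => k }   // keep only the keys, dropping duplicates
  .toList
```

On an RDD, the identical composition builds up a lineage of pending transformations instead; the shuffle implied by reduceByKey only happens once an action runs.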
