Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20800#discussion_r187831971
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
    @@ -511,6 +511,14 @@ class Dataset[T] private[sql](
        */
       def isLocal: Boolean = logicalPlan.isInstanceOf[LocalRelation]
     
    +  /**
    +   * Returns true if the `Dataset` is empty.
    +   *
    +   * @group basic
    +   * @since 2.4.0
    +   */
    +  def isEmpty: Boolean = rdd.isEmpty()
    --- End diff ---
    
    `RDD#isEmpty` is pretty efficient: it just checks whether all the partitions are empty, without loading the data. The problem is how to build an RDD from the `Dataset` in a way that minimizes the cost of building the `Iterator`.
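
    For reference, `RDD#isEmpty` boils down to roughly this (paraphrasing `RDD.scala`; the internal `withScope` wrapper is elided):

        // take(1) launches jobs on a few partitions at a time and stops
        // as soon as a single element is found, so no full scan is needed.
        def isEmpty(): Boolean = partitions.length == 0 || take(1).length == 0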
    
    It seems `Dataset#rdd` is not good enough: e.g., if we have a `Filter` in the query, we may do a full scan (no column pruning) of the underlying files.
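
    A hypothetical illustration of that concern (the path and column name are made up):

        import spark.implicits._
        // Suppose this is a wide Parquet table. Going through .rdd
        // materializes full rows, so the scan may not prune columns the
        // way the pure SQL path could.
        val ds = spark.read.parquet("/data/events").filter($"id" > 0)
        val nonEmpty = !ds.rdd.isEmpty()   // may read every column of every row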
    
    Doing a count is not perfect either; ideally we would stop as soon as we see one record.
    
    I'd suggest doing a `limit 1` first and then a count.
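
    A minimal sketch of the shape of that idea (not a final implementation):

        // limit(1) stays in the optimized SQL path, so column pruning and
        // limit pushdown still apply; the count then touches at most one row.
        def isEmpty: Boolean = limit(1).count() == 0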

