Github user jerryshao commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20437#discussion_r164968292
  
    --- Diff: streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala ---
    @@ -157,7 +157,7 @@ class FileInputDStream[K, V, F <: NewInputFormat[K, V]](
         val metadata = Map(
           "files" -> newFiles.toList,
           StreamInputInfo.METADATA_KEY_DESCRIPTION -> newFiles.mkString("\n"))
    -    val inputInfo = StreamInputInfo(id, 0, metadata)
    +    val inputInfo = StreamInputInfo(id, rdds.map(_.count).sum, metadata)
    --- End diff --
    
    This will kick off a new Spark job to read the files and count their records, which introduces noticeable overhead. By contrast, `count` in `DirectKafkaInputDStream` is computed purely from offsets, without reading any data.
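
    The contrast can be sketched in plain Scala. The `OffsetRange` class below is a simplified stand-in (not Spark's actual class) for how the Kafka direct stream derives its record count: summing `untilOffset - fromOffset` per partition is O(number of partitions) arithmetic, whereas `rdd.count` is an action that launches a job and scans the data.

    ```scala
    // Simplified stand-in for Kafka offset ranges; field names mirror the
    // idea, not Spark's real API. Counting here is pure arithmetic --
    // no records are read.
    case class OffsetRange(fromOffset: Long, untilOffset: Long) {
      def count: Long = untilOffset - fromOffset
    }

    // Two hypothetical partitions with 100 and 75 records respectively.
    val ranges = Seq(OffsetRange(0L, 100L), OffsetRange(50L, 125L))
    val numRecords = ranges.map(_.count).sum  // 175, computed without I/O

    println(numRecords)
    ```

    A file-based stream has no such metadata: the only way to get an exact record count is `rdds.map(_.count).sum`, which must actually read every file.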

