Github user maropu commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22324#discussion_r215461695
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala ---
    @@ -473,6 +476,27 @@ class FileBasedDataSourceSuite extends QueryTest with SharedSQLContext with Befo
           }
         }
       }
    +
    +  test("SPARK-25237 compute correct input metrics in FileScanRDD") {
    +    withTempPath { p =>
    +      val path = p.getAbsolutePath
    +      spark.range(1000).repartition(1).write.csv(path)
    +      val bytesReads = new mutable.ArrayBuffer[Long]()
    +      val bytesReadListener = new SparkListener() {
    +        override def onTaskEnd(taskEnd: SparkListenerTaskEnd) {
    +          bytesReads += taskEnd.taskMetrics.inputMetrics.bytesRead
    +        }
    +      }
    +      sparkContext.addSparkListener(bytesReadListener)
    +      try {
    +        spark.read.csv(path).limit(1).collect()
    +        sparkContext.listenerBus.waitUntilEmpty(1000L)
    +        assert(bytesReads.sum === 7860)
    --- End diff ---
    
    yea, actually the file size is `3890`, but the Hadoop API (`FileSystem.getAllStatistics`) reports a different number (`3930`). I haven't looked into the Hadoop code yet, so I don't get why. I'll dig into it later.
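    
    For anyone who wants to reproduce the comparison, here is a minimal sketch (my own illustration, not code from this PR) of where the two numbers come from. `FileSystem.getAllStatistics` and `getThreadStatistics.getBytesRead` are real Hadoop APIs; the helper names and the `part-` filename filter are assumptions:
    
    ```scala
    import java.io.File
    import scala.collection.JavaConverters._
    import org.apache.hadoop.fs.FileSystem
    
    // Sum of the per-thread bytesRead counters Hadoop keeps for every
    // registered FileSystem; this is the source of the `3930` above.
    def hadoopBytesRead: Long =
      FileSystem.getAllStatistics.asScala
        .map(_.getThreadStatistics.getBytesRead)
        .sum
    
    // On-disk size of the CSV part files the test writes (skipping the
    // _SUCCESS marker), i.e. the `3890` above. `path` is the same temp
    // directory used in the test.
    def onDiskBytes(path: String): Long =
      new File(path).listFiles()
        .filter(f => f.isFile && f.getName.startsWith("part-"))
        .map(_.length())
        .sum
    ```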

