Github user jerryshao commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19464#discussion_r144181321

    --- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
    @@ -196,7 +196,10 @@ class HadoopRDD[K, V](
         // add the credentials here as this can be called before SparkContext initialized
         SparkHadoopUtil.get.addCredentials(jobConf)
         val inputFormat = getInputFormat(jobConf)
    -    val inputSplits = inputFormat.getSplits(jobConf, minPartitions)
    +    var inputSplits = inputFormat.getSplits(jobConf, minPartitions)
    +    if (sparkContext.getConf.getBoolean("spark.hadoop.filterOutEmptySplit", false)) {
    --- End diff --

    I would suggest not using a name that starts with "spark.hadoop": configurations with this prefix are treated as Hadoop configuration and are set into the Hadoop `Configuration`, so it might be better to choose another name.
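    For context, a minimal sketch of the propagation behavior the comment refers to: SparkConf entries prefixed with "spark.hadoop." are copied into the Hadoop `Configuration` with the prefix stripped (Spark does this when building the per-job Hadoop configuration, via a helper similar to `SparkHadoopUtil.appendSparkHadoopConfigs`). The standalone helper below is illustrative, not Spark's actual implementation.

    ```scala
    import org.apache.hadoop.conf.Configuration
    import org.apache.spark.SparkConf

    object HadoopPrefixPropagation {
      // Copy "spark.hadoop."-prefixed SparkConf entries into a Hadoop
      // Configuration, dropping the prefix -- the mechanism that would
      // swallow a key like "spark.hadoop.filterOutEmptySplit".
      def appendSparkHadoopConfigs(conf: SparkConf, hadoopConf: Configuration): Unit = {
        for ((key, value) <- conf.getAll if key.startsWith("spark.hadoop.")) {
          hadoopConf.set(key.substring("spark.hadoop.".length), value)
        }
      }

      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .set("spark.hadoop.filterOutEmptySplit", "true")
        val hadoopConf = new Configuration(false)
        appendSparkHadoopConfigs(conf, hadoopConf)
        // Prints "true": the setting landed in the Hadoop Configuration
        // under the bare key "filterOutEmptySplit", rather than remaining
        // a purely Spark-side flag.
        println(hadoopConf.get("filterOutEmptySplit"))
      }
    }
    ```

    This is why a Spark-internal switch is better served by a name outside the "spark.hadoop." namespace, which is reserved for pass-through Hadoop settings.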