GitHub user rajeshbalamohan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11978#discussion_r57537799
  
    --- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
    @@ -979,6 +979,7 @@ class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationCli
         // A Hadoop configuration can be about 10 KB, which is pretty big, so broadcast it.
         val confBroadcast = broadcast(new SerializableConfiguration(hadoopConfiguration))
         val setInputPathsFunc = (jobConf: JobConf) => FileInputFormat.setInputPaths(jobConf, path)
    +    clean(setInputPathsFunc)
    --- End diff --
    
    Thanks @srowen. Yes, this is for invocations via sc.textFile. Adding an
    additional method like the following, and passing initLocalJobConfFuncOpt to
    it, would avoid closure cleaning in this scenario. However, that would require
    changes in all the other places where sc.textFile is invoked. The intention
    was to let users make use of HadoopRDD directly (if needed) without incurring
    the cost of closure cleaning (e.g., in the SQL modules), hence I did not make
    those additional changes.
    
    ```
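      // Variant of textFile that accepts initLocalJobConfFuncOpt, so the JobConf
      // setup function is passed through without being closure-cleaned.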
      def newTextFile(
          path: String,
          initLocalJobConfFuncOpt: Option[JobConf => Unit],
          minPartitions: Int = defaultMinPartitions): RDD[String] = withScope {
        assertNotStopped()
        hadoopFile(path, classOf[TextInputFormat], initLocalJobConfFuncOpt,
          classOf[LongWritable], classOf[Text],
          minPartitions).map(pair => pair._2.toString).setName(path)
      }
    
    
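      // Overload of hadoopFile that takes initLocalJobConfFuncOpt directly and
      // hands it to HadoopRDD, so no closure cleaning is needed here.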
      def hadoopFile[K, V](
          path: String,
          inputFormatClass: Class[_ <: InputFormat[K, V]],
          initLocalJobConfFuncOpt: Option[JobConf => Unit],
          keyClass: Class[K],
          valueClass: Class[V],
          minPartitions: Int = defaultMinPartitions): RDD[(K, V)] = withScope {
        assertNotStopped()
        // A Hadoop configuration can be about 10 KB, which is pretty big, so broadcast it.
        val confBroadcast = broadcast(new SerializableConfiguration(hadoopConfiguration))
        new HadoopRDD(
          this,
          confBroadcast,
          initLocalJobConfFuncOpt,
          inputFormatClass,
          keyClass,
          valueClass,
          minPartitions).setName(path)
      }
    
      // e.g.:
      sc.newTextFile(tmpFilePath, Some(setInputPathsFunc), 4).count()
    ```
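
    For completeness, a rough sketch of the direct HadoopRDD usage mentioned
    above (illustrative only: sc and tmpFilePath are assumed to be in scope, and
    since SerializableConfiguration is private[spark], this only compiles from
    code inside the org.apache.spark package, e.g. the SQL modules):

    ```
      import org.apache.hadoop.io.{LongWritable, Text}
      import org.apache.hadoop.mapred.{FileInputFormat, JobConf, TextInputFormat}
      import org.apache.spark.rdd.HadoopRDD
      import org.apache.spark.util.SerializableConfiguration

      // Broadcast the Hadoop configuration once, as SparkContext does internally.
      val confBroadcast = sc.broadcast(new SerializableConfiguration(sc.hadoopConfiguration))

      // Passed through as initLocalJobConfFuncOpt, so it is never closure-cleaned.
      val setInputPathsFunc = (jobConf: JobConf) => FileInputFormat.setInputPaths(jobConf, tmpFilePath)

      val rdd = new HadoopRDD(
        sc,
        confBroadcast,
        Some(setInputPathsFunc),
        classOf[TextInputFormat],
        classOf[LongWritable],
        classOf[Text],
        4)  // minPartitions
      rdd.map(_._2.toString).count()
    ```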


