GitHub user kiszk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22881#discussion_r229154733
  
    --- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
    @@ -471,4 +473,42 @@ object SparkHadoopUtil {
           hadoopConf.set(key.substring("spark.hadoop.".length), value)
         }
       }
    +
    +
    +  lazy val builderReflection: Option[(Class[_], Method, Method)] = Try {
    +    val cls = Utils.classForName(
    +      "org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder")
    +    (cls, cls.getMethod("replicate"), cls.getMethod("build"))
    +  }.toOption
    +
    +  // scalastyle:off line.size.limit
    +  /**
    +   * Create a path that uses replication instead of erasure coding, regardless of the default
    +   * configuration in hdfs for the given path.  This can be helpful as hdfs ec doesn't support
    --- End diff --
    
    nit: `ec` -> `erasure coding`
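For context, the diff above resolves `HdfsDataOutputStreamBuilder` reflectively so the code still compiles and runs against Hadoop versions that predate that builder API, caching the lookup in a `lazy val` and degrading to `None` when the class is absent. A minimal sketch of that look-up-once, fall-back-gracefully pattern, written in plain Java for illustration (the helper name and the `org.example.Missing` class are placeholders, not Spark code):

```java
import java.lang.reflect.Method;
import java.util.Optional;

public class ReflectionFallback {

    // Hypothetical helper mirroring the diff's pattern: try to resolve a
    // class and one of its methods, returning empty instead of throwing
    // when the class or method is not on the classpath.
    static Optional<Method> lookup(String className, String methodName) {
        try {
            Class<?> cls = Class.forName(className);
            return Optional.of(cls.getMethod(methodName));
        } catch (ReflectiveOperationException e) {
            // ClassNotFoundException and NoSuchMethodException both land here.
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        // A class/method present on every JVM resolves successfully:
        System.out.println(lookup("java.lang.String", "length").isPresent());
        // An absent class degrades to empty rather than failing:
        System.out.println(lookup("org.example.Missing", "build").isPresent());
    }
}
```

Callers can then branch on the `Optional` (as the Spark patch does on the `Option`) and invoke the reflected method only when the newer API is actually available.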


---
