Github user gerashegalov commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22213#discussion_r213489337
  
    --- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
    @@ -2062,8 +2062,10 @@ private[spark] object Utils extends Logging {
         try {
           val properties = new Properties()
           properties.load(inReader)
    -      properties.stringPropertyNames().asScala.map(
    -        k => (k, properties.getProperty(k).trim)).toMap
    +      properties.stringPropertyNames().asScala
    +        .map(k => (k, properties.getProperty(k)))
    --- End diff ---
    
    > By ASCII I mean you can pass in an ASCII number and translate it to the
    actual char in the code; that will mitigate the problem here.
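
    For reference, that suggestion would amount to something like the sketch
    below (the config key is hypothetical):

        import org.apache.spark.SparkConf

        // Hypothetical config key: the delimiter is passed as its numeric
        // (ASCII/Unicode) code point and converted back to a char in code.
        val conf = new SparkConf()
          .set("spark.hypothetical.delimiter.code", "9")  // 9 == '\t'
        val delimiter = conf.getInt("spark.hypothetical.delimiter.code", 9).toChar
        // delimiter == '\t'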
    
    I think I'll just keep passing the delimiter via `--conf` to Hadoop and
    everything else in a single properties file, to avoid dealing with manual
    conversion of ints to chars.
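
    For context on why the unconditional `.trim` bites here, a minimal sketch
    (the property name is hypothetical): a whitespace-only value, such as a tab
    delimiter, is erased by trimming, which is what the change in the diff
    above avoids.

        import java.io.StringReader
        import java.util.Properties
        import scala.collection.JavaConverters._

        // Hypothetical property whose value is a single tab character
        // (written as the \t escape in the properties file).
        val properties = new Properties()
        properties.load(new StringReader("my.delimiter=\\t"))

        // Old behavior: trimming every value turns the tab into "".
        val trimmed = properties.stringPropertyNames().asScala
          .map(k => (k, properties.getProperty(k).trim)).toMap
        // trimmed("my.delimiter") == ""

        // New behavior: keep the raw value, so the delimiter survives.
        val raw = properties.stringPropertyNames().asScala
          .map(k => (k, properties.getProperty(k))).toMap
        // raw("my.delimiter") == "\t"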

