Github user MaxGekk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20849#discussion_r175280808
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JSONOptions.scala ---
    @@ -85,6 +85,12 @@ private[sql] class JSONOptions(
     
       val multiLine = parameters.get("multiLine").map(_.toBoolean).getOrElse(false)
     
    +  /**
    +   * Standard charset name. For example UTF-8, UTF-16 and UTF-32.
    +   * If charset is not specified (None), it will be detected automatically.
    --- End diff --
    
    Do you mean the encoding of the record/line delimiter? It depends on the mode. In multiline mode, Jackson is able to detect it automatically. In per-line mode, Hadoop's LineRecordReader can accept a delimiter in any charset, but by default it splits the input on `'\r'`, `'\n'`, and `'\r\n'` in UTF-8. This will be fixed in separate PRs for https://issues.apache.org/jira/browse/SPARK-23724 and https://issues.apache.org/jira/browse/SPARK-23725.
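    
    To make the distinction concrete, here is a minimal sketch (assumptions: the `charset` option name from this PR, and a hypothetical `lineSep` option that SPARK-23724/SPARK-23725 would introduce; this is not code from this PR):
    
    ```scala
    import org.apache.spark.sql.SparkSession
    
    val spark = SparkSession.builder().appName("json-charset-sketch").getOrCreate()
    
    // Multiline mode: Jackson detects the charset of the whole document
    // automatically, so the `charset` option can be omitted.
    val wholeFile = spark.read
      .option("multiLine", "true")
      .json("/path/to/utf16_document.json")
    
    // Per-line mode: today the input is split on UTF-8 '\r', '\n' and '\r\n'
    // regardless of the file's actual charset; an explicit delimiter option
    // would be needed for non-UTF-8 data.
    val perLine = spark.read
      .option("charset", "UTF-16LE")   // option added by this PR
      .option("lineSep", "\n")         // assumption: pending SPARK-23724/SPARK-23725
      .json("/path/to/utf16_records.json")
    ```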


---
