GitHub user viirya commented on the issue:

    https://github.com/apache/spark/pull/20648
  
    @HyukjinKwon According to the documentation of `DataFrameReader.csv`, the behavior of the CSV reader isn't consistent with what is documented for `PERMISSIVE` mode:
    
    ```
    `PERMISSIVE` : sets other fields to `null` when it meets a corrupted record, and puts
    the malformed string into a field configured by `columnNameOfCorruptRecord`.
    ```
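    
    For reference, here is a minimal sketch of what that documented `PERMISSIVE` behavior would look like for CSV (the schema, column names, and input path below are only illustrative, not taken from this PR):
    
    ```scala
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.types._
    
    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    
    // Explicit schema that includes the corrupt-record column.
    val schema = new StructType()
      .add("a", IntegerType)
      .add("b", IntegerType)
      .add("_corrupt_record", StringType)
    
    val df = spark.read
      .schema(schema)
      .option("mode", "PERMISSIVE")
      .option("columnNameOfCorruptRecord", "_corrupt_record")
      .csv("/tmp/example.csv")  // hypothetical input path
    
    // Per the documentation, a malformed row should come back with the data
    // columns set to null and the raw line preserved in `_corrupt_record`.
    df.show(false)
    ```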
    
    Given that documentation, I think we may need to disable it for CSV too. What do you think?


