GitHub user MaxGekk opened a pull request: https://github.com/apache/spark/pull/23006
[SPARK-26007][SQL] DataFrameReader.csv() respects to spark.sql.columnNameOfCorruptRecord

## What changes were proposed in this pull request?

Passing the current value of the SQL config `spark.sql.columnNameOfCorruptRecord` to `CSVOptions` inside of `DataFrameReader.csv()`.

## How was this patch tested?

Added a test where the default value of `spark.sql.columnNameOfCorruptRecord` is changed.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/MaxGekk/spark-1 csv-corrupt-sql-config

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/23006.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #23006

----

commit ff8e7aed424db2eb60f5dc43e32d1696d04e2e4b
Author: Maxim Gekk <maxim.gekk@...>
Date:   2018-11-11T11:41:23Z

    Test for sql config columnNameOfCorruptRecord

commit 02a9d3e724bef7a101c0d4849510b7e30274c319
Author: Maxim Gekk <maxim.gekk@...>
Date:   2018-11-11T11:58:10Z

    Fix the issue

commit 49ea40a43481281983f1c7a50d87452c33391377
Author: Maxim Gekk <maxim.gekk@...>
Date:   2018-11-11T12:08:04Z

    Revert unrelated changes in imports

----

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
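[Editorial note] The resolution logic the fix describes can be sketched in plain Scala, without Spark. This is a hypothetical stand-in for `CSVOptions`, not the actual Spark source: the point of the patch is that `DataFrameReader.csv()` should hand the session's current `spark.sql.columnNameOfCorruptRecord` value to `CSVOptions` rather than letting a hard-coded default win. The object and method names below are invented for illustration.

```scala
// Hypothetical sketch of the option resolution the PR fixes.
// An explicit reader option takes precedence; otherwise the value of the
// SQL config spark.sql.columnNameOfCorruptRecord passed in by csv() is used.
object CorruptColumnSketch {
  // Spark's built-in default for the corrupt-record column name.
  val DefaultCorruptColumn = "_corrupt_record"

  // Simplified stand-in for CSVOptions: before the fix, csv() effectively
  // passed DefaultCorruptColumn here instead of the live session config.
  def resolveCorruptColumn(readerOptions: Map[String, String],
                           sessionConfValue: String): String =
    readerOptions.getOrElse("columnNameOfCorruptRecord", sessionConfValue)

  def main(args: Array[String]): Unit = {
    // Session config changed from the default: after the fix it is honored.
    println(resolveCorruptColumn(Map.empty, "_broken"))
    // An explicit reader option still overrides the session config.
    println(resolveCorruptColumn(
      Map("columnNameOfCorruptRecord" -> "bad"), "_broken"))
  }
}
```

In real usage this corresponds to setting `spark.conf.set("spark.sql.columnNameOfCorruptRecord", ...)` before calling `spark.read.csv(...)` in PERMISSIVE mode, which is what the added test exercises.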