Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15751#discussion_r86317692

    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVFileFormat.scala ---
    @@ -143,6 +143,15 @@ class CSVFileFormat extends TextBasedFileFormat with DataSourceRegister {

         val broadcastedHadoopConf =
           sparkSession.sparkContext.broadcast(new SerializableConfiguration(hadoopConf))

    +    if (csvOptions.failFast) {
    +      // We can fail before starting to parse in "FAILFAST" mode. In "PERMISSIVE" mode,
    +      // values of unsupported types are read as null. In "DROPMALFORMED" mode, records are
    +      // dropped only when they contain non-null values for unsupported types. We should use
    +      // `requiredSchema` instead of the whole schema `dataSchema` here so as not to break
    +      // the original behaviour.
    +      verifySchema(requiredSchema)

    --- End diff --

Here, it only checks the projected columns so as not to change the existing behaviour (the other columns are not actually checked during parsing anyway).
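For context, a minimal sketch of what a verifySchema-style check could look like, assuming the CSV source supports only atomic column types and rejects nested ones; the exact rejected-type list and error message here are assumptions for illustration, not the actual implementation in this PR:

    import org.apache.spark.sql.types._

    object SchemaCheck {
      // Hypothetical sketch: reject nested types, which a flat CSV row cannot
      // represent. Atomic types (string, numeric, boolean, date, timestamp) pass.
      def verifySchema(schema: StructType): Unit = {
        schema.foreach { field =>
          field.dataType match {
            case _: ArrayType | _: MapType | _: StructType =>
              throw new UnsupportedOperationException(
                s"CSV data source does not support ${field.dataType.simpleString} data type.")
            case _ => // atomic type, nothing to do
          }
        }
      }
    }

Because the check runs over `requiredSchema` rather than `dataSchema`, a query that projects away an unsupported column would still succeed even in "FAILFAST" mode, which is the existing behaviour the comment above is preserving.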