[GitHub] spark pull request #17023: [SPARK-19695][SQL] Throw an exception if a `colum...
Github user asfgit closed the pull request at:

    https://github.com/apache/spark/pull/17023

---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA.
---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
Github user maropu commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17023#discussion_r102624321

--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JsonFileFormat.scala ---
@@ -102,6 +102,15 @@ class JsonFileFormat extends TextBasedFileFormat with DataSourceRegister {
       sparkSession.sessionState.conf.sessionLocalTimeZone,
       sparkSession.sessionState.conf.columnNameOfCorruptRecord)
+    // Check a field requirement for corrupt records here to throw an exception in a driver side
+    dataSchema.getFieldIndex(parsedOptions.columnNameOfCorruptRecord).map { corruptFieldIndex =>
--- End diff --

okay!
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17023#discussion_r102619054

--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JsonFileFormat.scala ---
@@ -102,6 +102,15 @@ class JsonFileFormat extends TextBasedFileFormat with DataSourceRegister {
       sparkSession.sessionState.conf.sessionLocalTimeZone,
       sparkSession.sessionState.conf.columnNameOfCorruptRecord)
+    // Check a field requirement for corrupt records here to throw an exception in a driver side
+    dataSchema.getFieldIndex(parsedOptions.columnNameOfCorruptRecord).map { corruptFieldIndex =>
--- End diff --

Ideally the `columnNameOfCorruptRecord` stuff has nothing to do with the parser. The parser just parses the record and reports an error if some records are bad, and the upper level handles the bad records and may put the bad record in a special string column. I'm OK with keeping this code snippet duplicated in two places.
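The separation cloud-fan describes can be illustrated with a minimal, Spark-free sketch. All names here (`ParseResult`, `parse`, `toRow`, the toy `k=v` format) are hypothetical stand-ins, not the actual Spark code: the parser only succeeds or reports the raw record, and a separate upper layer decides to route failures into a special string column.

```scala
// Sketch only: the parser reports success or failure, and knows nothing
// about any corrupt-record column.
sealed trait ParseResult
case class Parsed(fields: Map[String, String]) extends ParseResult
case class Failed(rawRecord: String) extends ParseResult

// Hypothetical parser for comma-separated "k=v" pairs.
def parse(record: String): ParseResult = {
  val pairs = record.split(",").map(_.split("=", 2))
  if (pairs.nonEmpty && pairs.forall(_.length == 2)) {
    Parsed(pairs.map(p => p(0) -> p(1)).toMap)
  } else {
    Failed(record)
  }
}

// The upper level, not the parser, puts bad records in a special column.
def toRow(result: ParseResult, corruptCol: String): Map[String, String] =
  result match {
    case Parsed(fields) => fields
    case Failed(raw)    => Map(corruptCol -> raw)
  }
```

With this split, the parser can be reused by callers with different bad-record policies (drop, fail fast, or keep in a corrupt-record column).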
Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17023#discussion_r102603024

--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JsonFileFormat.scala ---
@@ -102,6 +102,15 @@ class JsonFileFormat extends TextBasedFileFormat with DataSourceRegister {
       sparkSession.sessionState.conf.sessionLocalTimeZone,
       sparkSession.sessionState.conf.columnNameOfCorruptRecord)
+    // Check a field requirement for corrupt records here to throw an exception in a driver side
+    dataSchema.getFieldIndex(parsedOptions.columnNameOfCorruptRecord).map { corruptFieldIndex =>
--- End diff --

And.. not a strong preference, but maybe put this in `JacksonUtils` and rename it `JsonUtils`, if @cloud-fan is okay? Maybe we could throw an `IllegalArgumentException` in the first place and then capture the message with an `AnalysisException` (as `JacksonUtils.verifySchema` is doing in `StructToJson`). This is not a strong opinion either.
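The shape of the check being discussed can be sketched without a Spark dependency. `Field` and `verifyCorruptColumn` below are simplified stand-ins for Spark's `StructField` and the proposed `JacksonUtils`-style helper, and the error text mirrors the message asserted in the PR's test; this is an illustration of the pattern, not the merged implementation.

```scala
// Simplified stand-in for org.apache.spark.sql.types.StructField.
case class Field(name: String, dataType: String, nullable: Boolean)

// If the corrupt-record column exists in the schema, it must be a
// nullable string; otherwise throw early, on the driver side.
def verifyCorruptColumn(schema: Seq[Field], corruptCol: String): Unit = {
  schema.find(_.name == corruptCol).foreach { f =>
    if (f.dataType != "string" || !f.nullable) {
      throw new IllegalArgumentException(
        "The field for corrupt records must be string type and nullable")
    }
  }
}
```

A caller (per the suggestion above) could catch the `IllegalArgumentException` and rethrow its message wrapped in an `AnalysisException`, keeping the low-level check free of analyzer types.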
Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17023#discussion_r102600267

--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JsonFileFormat.scala ---
@@ -102,6 +102,15 @@ class JsonFileFormat extends TextBasedFileFormat with DataSourceRegister {
       sparkSession.sessionState.conf.sessionLocalTimeZone,
       sparkSession.sessionState.conf.columnNameOfCorruptRecord)
+    // Check a field requirement for corrupt records here to throw an exception in a driver side
+    dataSchema.getFieldIndex(parsedOptions.columnNameOfCorruptRecord).map { corruptFieldIndex =>
--- End diff --

`map` to `foreach`?
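The `map`-vs-`foreach` point is a common Scala idiom worth spelling out: both run the block when the `Option` is defined, but `map` builds an `Option[Unit]` that the caller discards, while `foreach` states that the block exists only for its side effect. A minimal demonstration, unrelated to the Spark code itself:

```scala
// Record which branches actually execute.
var sideEffects = List.empty[Int]

val index: Option[Int] = Some(2)

// `map` runs the block but wraps the discarded Unit result in Some(()).
val mapped: Option[Unit] = index.map { i => sideEffects ::= i }

// `foreach` runs the same block and returns Unit, which matches the intent.
index.foreach { i => sideEffects ::= i }

// On None, neither form runs the block.
val absent: Option[Int] = None
absent.foreach { i => sideEffects ::= i }
```

Since the reviewed code throws from inside the block and ignores the result, `foreach` is the clearer choice.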
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17023#discussion_r102540297

--- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonSuite.scala ---
@@ -1944,4 +1944,37 @@ class JsonSuite extends QueryTest with SharedSQLContext with TestJsonData {
       assert(exceptionTwo.getMessage.contains("Malformed line in FAILFAST mode"))
     }
   }
+
+  test("Throw an exception if a `columnNameOfCorruptRecord` field violates requirements") {
+    val columnNameOfCorruptRecord = "_unparsed"
+    val schema = StructType(
+      StructField(columnNameOfCorruptRecord, IntegerType, true) ::
+      StructField("a", StringType, true) ::
+      StructField("b", StringType, true) ::
+      StructField("c", StringType, true) :: Nil)
+    val errMsg = intercept[AnalysisException] {
+      spark.read
+        .option("mode", "PERMISSIVE")
+        .option("columnNameOfCorruptRecord", columnNameOfCorruptRecord)
+        .schema(schema)
+        .json(corruptRecords)
+    }.getMessage
+    assert(errMsg.startsWith("The field for corrupt records must be string type and nullable"))
+
+    withTempPath { dir =>
+      val path = dir.getCanonicalPath
+      spark.createDataFrame(
+        corruptRecords.map(Row(_)), new StructType().add("value", StringType)
--- End diff --

`corruptRecords.toDF("value")`?
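The test above leans on ScalaTest's `intercept[T] { ... }.getMessage` pattern. For readers unfamiliar with it, here is a minimal re-implementation sketch (not ScalaTest's actual code): run the body, fail if it does not throw the expected type, and return the caught exception so its message can be inspected.

```scala
import scala.reflect.ClassTag

// Minimal sketch of an `intercept` helper: evaluate `body` lazily,
// require that it throws a T, and hand the exception back for checks.
def intercept[T <: Throwable](body: => Any)(implicit ct: ClassTag[T]): T =
  try {
    body
    throw new AssertionError(
      s"expected ${ct.runtimeClass.getName}, but no exception was thrown")
  } catch {
    case t: Throwable if ct.runtimeClass.isInstance(t) => t.asInstanceOf[T]
  }
```

Usage mirrors the test: `intercept[IllegalArgumentException] { require(false, "bad schema") }.getMessage` yields a message containing "bad schema".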
GitHub user maropu opened a pull request:

    https://github.com/apache/spark/pull/17023

[SPARK-19695][SQL] Throw an exception if a `columnNameOfCorruptRecord` field violates requirements

## What changes were proposed in this pull request?
This PR comes from #16928 and fixes the JSON behaviour along with the CSV one.

## How was this patch tested?
Added tests in `JsonSuite`.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/maropu/spark SPARK-19695

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/17023.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #17023