[GitHub] [spark] Hisoka-X commented on a diff in pull request #42979: [SPARK-45035][SQL] Fix ignoreCorruptFiles with multiline CSV/JSON will report error
Hisoka-X commented on code in PR #42979:
URL: https://github.com/apache/spark/pull/42979#discussion_r1331553477

## sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVDataSource.scala:

```diff
@@ -190,12 +191,19 @@ object MultiLineCSVDataSource extends CSVDataSource {
       parsedOptions: CSVOptions): StructType = {
     val csv = createBaseRdd(sparkSession, inputPaths, parsedOptions)
     csv.flatMap { lines =>
-      val path = new Path(lines.getPath())
-      UnivocityParser.tokenizeStream(
-        CodecStreams.createInputStreamWithCloseResource(lines.getConfiguration, path),
-        shouldDropHeader = false,
-        new CsvParser(parsedOptions.asParserSettings),
-        encoding = parsedOptions.charset)
+      try {
+        val path = new Path(lines.getPath())
+        UnivocityParser.tokenizeStream(
+          CodecStreams.createInputStreamWithCloseResource(lines.getConfiguration, path),
+          shouldDropHeader = false,
+          new CsvParser(parsedOptions.asParserSettings),
+          encoding = parsedOptions.charset)
+      } catch {
+        case e @ (_: RuntimeException | _: IOException) if parsedOptions.ignoreCorruptFiles =>
+          logWarning(
+            s"Skipped the rest of the content in the corrupted file: ${lines.getPath()}", e)
```

Review Comment:
How about changing it to this?

```scala
case e: TextParsingException if parsedOptions.ignoreCorruptFiles &&
    e.getCause.getCause.isInstanceOf[EOFException] =>
```

@HyukjinKwon

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
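One caveat with the suggested guard: chaining two `getCause` calls throws a `NullPointerException` whenever the cause chain is shorter than two links, which would mask the original parse error. A minimal, hedged sketch (not Spark's actual code; `CauseChain.hasCause` is a hypothetical helper, and it assumes the cause chain is acyclic) of a null-safe alternative:

```scala
import java.io.EOFException

object CauseChain {
  // Walk e's cause chain null-safely and report whether any link is an
  // instance of the target exception class. Unlike e.getCause.getCause,
  // this cannot NPE when the chain has zero or one links.
  def hasCause(e: Throwable, target: Class[_ <: Throwable]): Boolean =
    Iterator.iterate(e.getCause)(_.getCause)
      .takeWhile(_ != null)
      .exists(target.isInstance)
}
```

With this, the guard could be written as `CauseChain.hasCause(e, classOf[EOFException])`, which stays false (rather than throwing) for a bare `TextParsingException` with no nested cause.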
[GitHub] [spark] Hisoka-X commented on a diff in pull request #42979: [SPARK-45035][SQL] Fix ignoreCorruptFiles with multiline CSV/JSON will report error
Hisoka-X commented on code in PR #42979:
URL: https://github.com/apache/spark/pull/42979#discussion_r1329759589

## sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVDataSource.scala:

```diff
@@ -190,12 +191,19 @@ object MultiLineCSVDataSource extends CSVDataSource {
       parsedOptions: CSVOptions): StructType = {
     val csv = createBaseRdd(sparkSession, inputPaths, parsedOptions)
     csv.flatMap { lines =>
-      val path = new Path(lines.getPath())
-      UnivocityParser.tokenizeStream(
-        CodecStreams.createInputStreamWithCloseResource(lines.getConfiguration, path),
-        shouldDropHeader = false,
-        new CsvParser(parsedOptions.asParserSettings),
-        encoding = parsedOptions.charset)
+      try {
+        val path = new Path(lines.getPath())
+        UnivocityParser.tokenizeStream(
+          CodecStreams.createInputStreamWithCloseResource(lines.getConfiguration, path),
+          shouldDropHeader = false,
+          new CsvParser(parsedOptions.asParserSettings),
+          encoding = parsedOptions.charset)
+      } catch {
+        case e @ (_: RuntimeException | _: IOException) if parsedOptions.ignoreCorruptFiles =>
+          logWarning(
+            s"Skipped the rest of the content in the corrupted file: ${lines.getPath()}", e)
```

Review Comment:
Should we reduce the scope of the errors we catch? I followed the catch range used by FileScanRDD to decide whether a file counts as corrupted.
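For context, the "catch range of FileScanRDD" the comment refers to is the pattern in the diff itself: treat `RuntimeException` and `IOException` as corrupted-file signals only when `ignoreCorruptFiles` is enabled, and let everything else propagate. A hedged, self-contained sketch of that pattern (`tolerantRead`, `readFile`, `path`, and `ignoreCorruptFiles` are illustrative stand-ins, not Spark APIs):

```scala
import java.io.IOException

// Run a file-reading function, skipping the rest of a file on the same
// exception types FileScanRDD tolerates, but only when the caller has
// opted in via ignoreCorruptFiles. Any other exception is rethrown.
def tolerantRead[T](path: String, ignoreCorruptFiles: Boolean)(
    readFile: String => Seq[T]): Seq[T] =
  try readFile(path)
  catch {
    case e @ (_: RuntimeException | _: IOException) if ignoreCorruptFiles =>
      // Mirrors the PR's log-and-skip behavior (logWarning in Spark).
      println(s"Skipped the rest of the content in the corrupted file: $path")
      Seq.empty
  }
```

The design trade-off under discussion: this broad catch is consistent with FileScanRDD but can swallow genuine bugs surfacing as `RuntimeException`, whereas the narrower `TextParsingException`-with-`EOFException`-cause guard proposed above only skips truncated files.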