maropu commented on a change in pull request #23665: [SPARK-26745][SQL] Skip empty lines in JSON-derived DataFrames when skipParsing optimization in effect
URL: https://github.com/apache/spark/pull/23665#discussion_r251270647
 
 

 ##########
 File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/FailureSafeParser.scala
 ##########
 @@ -55,11 +56,15 @@ class FailureSafeParser[IN](
 
   def parse(input: IN): Iterator[InternalRow] = {
     try {
-     if (skipParsing) {
-       Iterator.single(InternalRow.empty)
-     } else {
-       rawParser.apply(input).toIterator.map(row => toResultRow(Some(row), () => null))
-     }
+      if (skipParsing) {
+        if (unparsedRecordIsNonEmpty(input)) {
+          Iterator.single(InternalRow.empty)
+        } else {
+          Iterator.empty
+        }
+      } else {
+        rawParser.apply(input).toIterator.map(row => toResultRow(Some(row), () => null))
+      }
 
 Review comment:
   Do we need to modify this file? Since this is an issue in the JSON code, I think it's better to handle this case on the JSON parser side. Can't we handle it the same way as the CSV one? e.g.,
   https://github.com/apache/spark/blob/3a17c6a06bc4e8a6cae506a61d3d83696d378607/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/csv/UnivocityParser.scala#L333
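   For context, a minimal sketch of the kind of JSON-side handling being suggested, assuming a hypothetical helper (the names `JsonBlankRecordFilter` and `isBlank` are illustrative, not the actual Spark code): drop whitespace-only records before they ever reach `FailureSafeParser`, analogous to how the CSV path in `UnivocityParser` avoids emitting rows for empty input.

   ```scala
   // Illustrative sketch only (hypothetical names, not the actual Spark code):
   // filter out blank records on the JSON side so the skipParsing fast path in
   // FailureSafeParser never turns them into spurious empty rows.
   object JsonBlankRecordFilter {
     // A record is considered blank when it is empty or contains only whitespace.
     private def isBlank(record: String): Boolean =
       record.forall(_.isWhitespace)

     // Drop blank records up front, before parsing.
     def nonBlank(records: Iterator[String]): Iterator[String] =
       records.filterNot(isBlank)
   }

   // Example usage:
   //   JsonBlankRecordFilter.nonBlank(Iterator("""{"a": 1}""", "", "   "))
   //   // yields only the non-blank JSON record
   ```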
