HyukjinKwon commented on a change in pull request #23665: [SPARK-26745][SQL] 
Skip empty lines in JSON-derived DataFrames when skipParsing optimization in 
effect
URL: https://github.com/apache/spark/pull/23665#discussion_r251279489
 
 

 ##########
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/FailureSafeParser.scala
 ##########
 @@ -55,11 +56,15 @@ class FailureSafeParser[IN](
 
   def parse(input: IN): Iterator[InternalRow] = {
     try {
-     if (skipParsing) {
-       Iterator.single(InternalRow.empty)
-     } else {
-       rawParser.apply(input).toIterator.map(row => toResultRow(Some(row), () => null))
-     }
+      if (skipParsing) {
+        if (unparsedRecordIsNonEmpty(input)) {
 
 Review comment:
   Yeah, that's a safer approach if it were possible, but to do that we would have to manually check the input, as the current PR does. Checking the input outside of `JacksonParser` doesn't look like a good idea to me.
   
   The problem is, we cannot distinguish the cases below without parsing:
   
   ```
   [{...}, {...}]
   ```
   
   ```
   []
   ```
   
   ```
   {...}
   ```
   
   ```
   # empty string
   ```
   
   A single line (`input: IN`) can contain 0 records, 1 record, or multiple records.
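   To make the ambiguity concrete, here is a small sketch (in Python rather than Spark's Scala, and with a hypothetical helper `count_records`, purely for illustration) of why the number of records per line is only known after parsing: an empty string, `[]`, `{...}`, and `[{...}, {...}]` all look like "one line of input" but yield different record counts.

   ```python
   import json

   def count_records(line: str) -> int:
       """Hypothetical helper: how many top-level records one input line yields.

       A blank line yields 0 records, a JSON array yields one record per
       element (so `[]` yields 0), and a single JSON object yields 1.
       """
       if not line.strip():
           return 0
       value = json.loads(line)
       if isinstance(value, list):
           return len(value)
       return 1

   # Each of these is a single "line", but the record counts differ:
   print(count_records('[{"a": 1}, {"a": 2}]'))  # 2
   print(count_records('[]'))                    # 0
   print(count_records('{"a": 1}'))              # 1
   print(count_records(''))                      # 0
   ```

   Only after running the parser over the line do the four cases separate, which is why checking the raw input outside the parser cannot decide how many rows to emit.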

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
