Github user MaxGekk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21439#discussion_r194491865
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala ---
    @@ -101,6 +102,13 @@ class JacksonParser(
         }
       }
     
    +  private def makeArrayRootConverter(at: ArrayType): JsonParser => Seq[InternalRow] = {
    +    val elemConverter = makeConverter(at.elementType)
    +    (parser: JsonParser) => parseJsonToken[Seq[InternalRow]](parser, at) {
    +      case START_ARRAY => Seq(InternalRow(convertArray(parser, elemConverter)))
    --- End diff --
    
    The code at line 87 returns `null` for the JSON input `[]` when the schema is `StructType(StructField("a", IntegerType) :: Nil)`. I would explain why we should return `null` in that case: we *extract* a struct from the array. If the array is _empty_, there is nothing to extract, so we return `null`.
    
    When the schema is `ArrayType(...)`, however, I believe we should return an _empty_ array for the empty JSON array `[]`.
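
    To illustrate the distinction (a hypothetical sketch, not code from this PR; the schema values are my own examples):
    
    ```scala
    import org.apache.spark.sql.types._
    
    // Struct schema: a struct must be *extracted* from the array.
    // An empty array [] holds no struct to extract, hence null.
    val structSchema = StructType(StructField("a", IntegerType) :: Nil)
    
    // Array schema: the JSON value *is* the array itself.
    // An empty array [] is a perfectly valid value: an empty array, not null.
    val arraySchema = ArrayType(StructType(StructField("a", IntegerType) :: Nil))
    ```
    
    So the root converter for `ArrayType` should map `[]` to an empty array value rather than reusing the struct-extraction behavior.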


---
