Github user MaxGekk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21439#discussion_r204553995
  
    --- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala
 ---
    @@ -101,6 +102,17 @@ class JacksonParser(
         }
       }
     
    +  private def makeArrayRootConverter(at: ArrayType): JsonParser => Seq[InternalRow] = {
    +    val elemConverter = makeConverter(at.elementType)
    +    (parser: JsonParser) => parseJsonToken[Seq[InternalRow]](parser, at) {
    +      case START_ARRAY => Seq(InternalRow(convertArray(parser, elemConverter)))
    +      case START_OBJECT if at.elementType.isInstanceOf[StructType] =>
    --- End diff --
    
    I can only say that this `case START_OBJECT` was added to handle the case 
when a user specifies a schema as an `array` of `struct`s but a single `struct` is 
found in the input JSON. See the existing test case which I pointed out above: 
https://github.com/apache/spark/pull/16929/files#diff-88230f171af0b7a40791a867f9dd3a36R382
 . I don't want to change that behavior in this PR and potentially break users' 
apps. What I would propose is to put the functionality under a 
`spark.sql.legacy.*` flag which could be removed in Spark 3.0.
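    To illustrate the lenient behavior being discussed, here is a minimal, 
self-contained sketch (no Spark dependency; the types and names below are 
illustrative, not Spark's actual internals): when the expected schema is an 
`array` of `struct` but the JSON root token is a single object, the object is 
wrapped into a one-element array instead of failing the parse.
    
    ```scala
    // Hypothetical sketch of the array-root conversion rule. JObject stands in
    // for a parsed struct, JArray for a parsed array; neither is a Spark type.
    object ArrayRootSketch {
      sealed trait Json
      case class JObject(fields: Map[String, Json]) extends Json
      case class JArray(items: Seq[Json]) extends Json
      case class JString(s: String) extends Json
    
      // Schema is array<struct>. A root-level array keeps its struct elements;
      // a root-level object is leniently treated as an array of one struct.
      def parseArrayRoot(root: Json): Seq[JObject] = root match {
        case JArray(items) => items.collect { case o: JObject => o }
        case o: JObject    => Seq(o)  // the `case START_OBJECT` path in the diff
        case _ =>
          throw new IllegalArgumentException("cannot convert value to array<struct>")
      }
    }
    ```
    
    The second match arm corresponds to the `case START_OBJECT` branch in the 
diff above; removing it would turn such inputs into parse errors, which is the 
compatibility concern here.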
    
    @maropu Initially I put the behavior under a flag in `JsonToStructs`, but 
Reynold asked me to support the new behavior without introducing any new flags.


---
