Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/23184#discussion_r238057161

--- Diff: sql/core/src/main/scala/org/apache/spark/sql/api/r/SQLUtils.scala ---

```diff
@@ -225,4 +225,10 @@ private[sql] object SQLUtils extends Logging {
     }
     sparkSession.sessionState.catalog.listTables(db).map(_.table).toArray
   }
+
+  def createArrayType(elementType: DataType): ArrayType = DataTypes.createArrayType(elementType)
```

--- End diff --

Yea, I remember that. I thought this case was a bit different from the other instances, actually. It reduces code complexity on the R side, because the R side calls these overloaded methods directly. For instance, currently it is called as:

```r
jschema <- callJStatic("org.apache.spark.sql.api.r.SQLUtils", "createArrayType", jschema)
```

but if we remove those overloads, it would have to become:

```r
if (class(schema) == "dataType") {
  jschema <- callJStatic("org.apache.spark.sql.types.DataTypes", "createArrayType", schema$jobj)
} else {
  jschema <- callJStatic("org.apache.spark.sql.api.r.SQLUtils", "createArrayType", schema$jobj)
}
```

Let me try to remove this one anyway.
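The trade-off being discussed hinges on JVM-side method overloading: if a single class exposes `createArrayType` for every input shape, the R caller never has to branch on the schema's type. A rough standalone sketch of that pattern (not Spark's actual `SQLUtils`; the `DataType` hierarchy and `parse` helper here are invented for illustration):

```scala
// Hypothetical sketch of the overloading pattern the comment relies on.
// A single object exposes one method name for several argument types, so
// a reflective caller (like SparkR's callJStatic) can always target the
// same class and method regardless of what it holds.
object SQLUtilsSketch {
  sealed trait DataType
  case object StringType extends DataType
  final case class ArrayType(elementType: DataType) extends DataType

  // Overload 1: caller already has a structured DataType.
  def createArrayType(elementType: DataType): ArrayType =
    ArrayType(elementType)

  // Overload 2: caller only has a type name as a string.
  def createArrayType(elementType: String): ArrayType =
    ArrayType(parse(elementType))

  private def parse(name: String): DataType = name match {
    case "string" => StringType
    case other    => sys.error(s"unsupported type: $other")
  }
}
```

Without the string overload, each caller would need the `if`/`else` dispatch shown in the second R snippet above, duplicated at every call site.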