Github user jodersky commented on a diff in the pull request: https://github.com/apache/spark/pull/15257#discussion_r81257481

--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala ---

@@ -60,6 +76,45 @@ object Literal {
   }

   /**
+   * Returns the Spark SQL DataType for a given class object. Since this type needs to be resolved
+   * at runtime, we use match-case idioms for class objects here. However, there are similar
+   * functions in other files (e.g., HiveInspectors), so these functions need to be merged into one.
+   */
+  private[this] def componentTypeToDataType(clz: Class[_]): DataType = clz match {
+    // primitive types
+    case c: Class[_] if c == JavaShort.TYPE => ShortType

--- End diff --

Cool, I saw you updated the match. However, you can remove the instance check everywhere, including further down. A `case c: Class[_]` is "equivalent" to a `c.isInstanceOf[Class[_]]` check, which is redundant here since your parameter `clz` already declares the type to be `Class[_]`.
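To illustrate the point, here is a minimal, self-contained sketch (not Spark's actual code; `verbose`/`simplified` and the string results are placeholders): the type ascription in `case c: Class[_]` compiles to an `isInstanceOf` test that can never fail when the scrutinee is already declared as `Class[_]`, so a plain binding with a guard behaves identically.

```scala
object MatchSketch {
  import java.lang.{Short => JavaShort, Integer => JavaInteger}

  // Redundant form: the `c: Class[_]` pattern adds an instance check
  // that always succeeds, because `clz` is already a Class[_].
  def verbose(clz: Class[_]): String = clz match {
    case c: Class[_] if c == JavaShort.TYPE   => "ShortType"
    case c: Class[_] if c == JavaInteger.TYPE => "IntegerType"
    case _                                    => "unknown"
  }

  // Equivalent, simpler form: bind the scrutinee without restating its type.
  def simplified(clz: Class[_]): String = clz match {
    case c if c == JavaShort.TYPE   => "ShortType"
    case c if c == JavaInteger.TYPE => "IntegerType"
    case _                          => "unknown"
  }
}
```

Both functions dispatch on the same guard conditions, so they return the same result for every input; only the redundant pattern differs.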