Github user michalsenkyr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22646#discussion_r229093388
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
    @@ -1115,9 +1126,38 @@ object SQLContext {
                 })
             }
         }
    -    def createConverter(cls: Class[_], dataType: DataType): Any => Any = dataType match {
    -      case struct: StructType => createStructConverter(cls, struct.map(_.dataType))
    -      case _ => CatalystTypeConverters.createToCatalystConverter(dataType)
    +    def createConverter(t: Type, dataType: DataType): Any => Any = (t, dataType) match {
    --- End diff ---
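
    For context (a hypothetical sketch, not the PR's actual code): matching on
    the `(Type, DataType)` pair rather than a bare `Class[_]` lets the converter
    see generic type arguments via `java.lang.reflect.ParameterizedType`. The
    `describe` helper below is illustrative only:

    ```scala
    import java.lang.reflect.{ParameterizedType, Type}
    import org.apache.spark.sql.types._

    // Hypothetical: a raw Class[_] carries no generic information, but a
    // java.lang.reflect.Type may be a ParameterizedType whose type arguments
    // (e.g. the element type of a java.util.List) can be recursed into
    // alongside the corresponding Catalyst DataType.
    def describe(t: Type, dataType: DataType): String = (t, dataType) match {
      case (pt: ParameterizedType, ArrayType(elementType, _)) =>
        s"list of ${describe(pt.getActualTypeArguments.head, elementType)}"
      case (c: Class[_], st: StructType) =>
        s"bean ${c.getName} with ${st.fields.length} fields"
      case (_, dt) => dt.simpleString
    }
    ```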
    
    I took a quick look at `CatalystTypeConverters` and I believe there would
    be a problem: we would not be able to reliably distinguish Java beans from
    other arbitrary classes. We might call setters, or set fields directly, on
    objects that are not prepared for such manipulation, potentially creating
    hard-to-find errors. This method already assumes a Java bean, so that
    problem does not arise here. Isn't that so?
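
    To illustrate the distinction (a minimal, hypothetical sketch;
    `writableProperties` and `NotABean` are made up for this example): a Java
    bean is only a convention, so the only runtime signal is the presence of
    getter/setter pairs, which `java.beans.Introspector` can enumerate:

    ```scala
    import java.beans.{Introspector, PropertyDescriptor}

    // A "bean" is a convention (no-arg constructor plus getter/setter pairs);
    // nothing in the type system marks a class as one, so the best available
    // signal is its set of writable properties.
    def writableProperties(cls: Class[_]): Seq[PropertyDescriptor] =
      Introspector.getBeanInfo(cls).getPropertyDescriptors.toSeq
        .filter(_.getWriteMethod != null)

    // An arbitrary class looks identical to a bean at the Class[_] level but
    // exposes no setters; a converter that calls setters or writes fields
    // directly on it could silently corrupt state it was never meant to touch.
    class NotABean(val x: Int)

    // Prints nothing: NotABean has no writable bean properties.
    writableProperties(classOf[NotABean]).map(_.getName).foreach(println)
    ```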


---
