Github user ueshin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20214#discussion_r161153123
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
    @@ -237,13 +237,18 @@ class Dataset[T] private[sql](
       private[sql] def showString(
           _numRows: Int, truncate: Int = 20, vertical: Boolean = false): String = {
         val numRows = _numRows.max(0).min(Int.MaxValue - 1)
    -    val takeResult = toDF().take(numRows + 1)
    +    val newDf = toDF()
    +    val castExprs = newDf.schema.map { f => f.dataType match {
     +      // Since binary types in top-level schema fields have a specific format to print,
     +      // we do not cast them to strings here.
    +      case BinaryType => s"`${f.name}`"
    +      case _: UserDefinedType[_] => s"`${f.name}`"
    --- End diff --
    
    How about something like:
    
    ```scala
          case udt: UserDefinedType[_] =>
            (c, evPrim, evNull) => {
              val udtTerm = ctx.addReferenceObj("udt", udt)
              s"$evPrim = UTF8String.fromString($udtTerm.deserialize($c).toString());"
            }
    ```
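
    For reference, the generated Java in that branch boils down to deserializing the stored Catalyst value through the UDT and calling `toString` on the resulting user-space object. Below is a minimal, non-codegen sketch of that idea; the `Point`/`PointUDT` pair is a hypothetical example (not part of the PR), and because the UDT API is `private[spark]` in Spark 2.x the sketch assumes it is compiled inside Spark's own source tree. Only the last few lines mirror what the generated code would evaluate:

    ```scala
    import org.apache.spark.sql.catalyst.util.{ArrayData, GenericArrayData}
    import org.apache.spark.sql.types._
    import org.apache.spark.unsafe.types.UTF8String

    // Hypothetical user class and UDT, for illustration only.
    case class Point(x: Double, y: Double) {
      override def toString: String = s"($x, $y)"
    }

    class PointUDT extends UserDefinedType[Point] {
      override def sqlType: DataType = ArrayType(DoubleType, containsNull = false)
      override def serialize(p: Point): GenericArrayData =
        new GenericArrayData(Array[Any](p.x, p.y))
      override def deserialize(datum: Any): Point = datum match {
        case a: ArrayData => Point(a.getDouble(0), a.getDouble(1))
      }
      override def userClass: Class[Point] = classOf[Point]
    }

    // What the suggested codegen branch would effectively do at runtime:
    val udt = new PointUDT
    val internalValue = udt.serialize(Point(1.0, 2.0))   // Catalyst-internal representation
    val display = UTF8String.fromString(udt.deserialize(internalValue).toString)
    // display holds "(1.0, 2.0)"
    ```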


