Github user henryr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19774#discussion_r152099227
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
    @@ -689,6 +689,11 @@ case class DescribeColumnCommand(
           buffer += Row("distinct_count", cs.map(_.distinctCount.toString).getOrElse("NULL"))
           buffer += Row("avg_col_len", cs.map(_.avgLen.toString).getOrElse("NULL"))
           buffer += Row("max_col_len", cs.map(_.maxLen.toString).getOrElse("NULL"))
    +      buffer ++= cs.flatMap(_.histogram.map { hist =>
    +        val header = Row("histogram", s"height: ${hist.height}, num_of_bins: ${hist.bins.length}")
    +        Seq(header) ++ hist.bins.map(bin =>
    +          Row("", s"lower_bound: ${bin.lo}, upper_bound: ${bin.hi}, distinct_count: ${bin.ndv}"))
    --- End diff ---
    
    Yeah, I was wondering whether we actually need to see every individual bucket, rather than a summary (e.g. max bucket: 2.0->4.0 w/128 entries, min bucket: 6.0->8.0 w/0 entries, median bucket size: 32).
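
    A rough sketch of the kind of summary I mean (hypothetical code, not part of the PR; `Bin` here stands in for Spark's histogram bin with `lo`, `hi`, and `ndv` fields, and `summarize` is a made-up helper name):

    ```scala
    // Hypothetical stand-in for Spark's HistogramBin: lower bound, upper bound,
    // and the number of distinct values that fall into the bucket.
    case class Bin(lo: Double, hi: Double, ndv: Long)

    // Condense all buckets into one line: largest bucket, smallest bucket,
    // and the median bucket size. Assumes `bins` is non-empty.
    def summarize(bins: Seq[Bin]): String = {
      val max = bins.maxBy(_.ndv)
      val min = bins.minBy(_.ndv)
      val sortedNdvs = bins.map(_.ndv).sorted
      val median = sortedNdvs(sortedNdvs.length / 2)
      s"max bucket: ${max.lo}->${max.hi} w/${max.ndv} entries, " +
        s"min bucket: ${min.lo}->${min.hi} w/${min.ndv} entries, " +
        s"median bucket size: $median"
    }
    ```

    That would keep the `DESCRIBE` output to a single row regardless of how many bins the histogram has.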


---
