[ https://issues.apache.org/jira/browse/SPARK-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15279623#comment-15279623 ]
Apache Spark commented on SPARK-15114:
--------------------------------------

User 'dilipbiswal' has created a pull request for this issue:
https://github.com/apache/spark/pull/13045

> Column name generated by typed aggregate is super verbose
> ---------------------------------------------------------
>
>                 Key: SPARK-15114
>                 URL: https://issues.apache.org/jira/browse/SPARK-15114
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>            Reporter: Yin Huai
>            Priority: Critical
>
> {code}
> case class Person(name: String, email: String, age: Long)
> val ds = spark.read.json("/tmp/person.json").as[Person]
> import org.apache.spark.sql.expressions.scala.typed._
> ds.groupByKey(_ => 0).agg(sum(_.age))
> // org.apache.spark.sql.Dataset[(Int, Double)] = [value: int, typedsumdouble(unresolveddeserializer(newInstance(class Person), age#0L, email#1, name#2), upcast(value)): double]
> ds.groupByKey(_ => 0).agg(sum(_.age)).explain
> == Physical Plan ==
> WholeStageCodegen
> :  +- TungstenAggregate(key=[value#84], functions=[(TypedSumDouble($line15.$read$$iw$$iw$Person),mode=Final,isDistinct=false)], output=[value#84,typedsumdouble(unresolveddeserializer(newInstance(class $line15.$read$$iw$$iw$Person), age#0L, email#1, name#2), upcast(value))#91])
> :     +- INPUT
> +- Exchange hashpartitioning(value#84, 200), None
>    +- WholeStageCodegen
>       :  +- TungstenAggregate(key=[value#84], functions=[(TypedSumDouble($line15.$read$$iw$$iw$Person),mode=Partial,isDistinct=false)], output=[value#84,value#97])
>       :     +- INPUT
>       +- AppendColumns <function1>, newInstance(class $line15.$read$$iw$$iw$Person), [input[0, int] AS value#84]
>          +- WholeStageCodegen
>             :  +- Scan HadoopFiles[age#0L,email#1,name#2] Format: JSON, PushedFilters: [], ReadSchema: struct<age:bigint,email:string,name:string>
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
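Editor's note: the verbose name reported above is the typed aggregate expression's string form being reused as the output column name. Until the fix in the linked PR, the column can be aliased explicitly. A minimal sketch, assuming a Spark 2.0-era session `spark` and the same `/tmp/person.json` input as in the report (`sum_age` is a hypothetical alias; `.name(...)` is the standard `Column` alias method, and `.as[Double]` re-attaches the encoder so the result is still a `TypedColumn` accepted by `agg`):

{code}
import org.apache.spark.sql.expressions.scala.typed

case class Person(name: String, email: String, age: Long)

val ds = spark.read.json("/tmp/person.json").as[Person]

// Alias the aggregate so the schema shows a short, readable name
// instead of the generated typedsumdouble(...) expression string.
val result = ds.groupByKey(_ => 0)
  .agg(typed.sum[Person](_.age).name("sum_age").as[Double])
{code}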