[ https://issues.apache.org/jira/browse/SPARK-10741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14905327#comment-14905327 ]

Ian edited comment on SPARK-10741 at 9/23/15 9:36 PM:
------------------------------------------------------

Yes, going through all rules when resolving Sort on Aggregate is the correct 
approach.

The main problem appears to be that the execute call at 
https://github.com/apache/spark/blob/v1.5.0/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala#L571
resolves attributes to different ids, causing confusion at 
https://github.com/apache/spark/blob/v1.5.0/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala#L592-L611.
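
To make the id mismatch concrete, here is a minimal sketch (illustrative only, not code from the patch; the column name and the id values in the comments are taken from the error message below) of why two resolution passes over the same column produce attributes that do not match, since Catalyst attribute equality includes the exprId:

{code}
import org.apache.spark.sql.catalyst.expressions.AttributeReference
import org.apache.spark.sql.types.IntegerType

// Each resolution pass mints a fresh ExprId, so re-resolving the same
// column in a second execute() pass yields, e.g., c2#16 vs c2#18.
val firstPass  = AttributeReference("c2", IntegerType)()
val secondPass = AttributeReference("c2", IntegerType)()

// Attribute equality includes the exprId, so these do not compare equal,
// and an Aggregate built against one set of ids cannot find its inputs
// in the other, producing the "resolved attribute(s) ... missing" error.
assert(firstPass.exprId != secondPass.exprId)
assert(firstPass != secondPass)
{code}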

Just so I understand a bit more: the second approach you are proposing 
removes the confusion by changing how ids are resolved in 
Analyzer.scala#L571, right? 






> Hive Query Having/OrderBy against Parquet table is not working 
> ---------------------------------------------------------------
>
>                 Key: SPARK-10741
>                 URL: https://issues.apache.org/jira/browse/SPARK-10741
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.5.0
>            Reporter: Ian
>
> Failed Query with Having Clause
> {code}
>   def testParquetHaving() {
>     val ddl =
>       """CREATE TABLE IF NOT EXISTS test ( c1 string, c2 int ) STORED AS PARQUET"""
>     val failedHaving =
>       """ SELECT c1, avg ( c2 ) as c_avg
>         | FROM test
>         | GROUP BY c1
>         | HAVING ( avg ( c2 ) > 5)  ORDER BY c1""".stripMargin
>     TestHive.sql(ddl)
>     TestHive.sql(failedHaving).collect
>   }
> org.apache.spark.sql.AnalysisException: resolved attribute(s) c2#16 missing from c1#17,c2#18 in operator !Aggregate [c1#17], [cast((avg(cast(c2#16 as bigint)) > cast(5 as double)) as boolean) AS havingCondition#12,c1#17,avg(cast(c2#18 as bigint)) AS c_avg#9];
>       at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.failAnalysis(CheckAnalysis.scala:37)
>       at org.apache.spark.sql.catalyst.analysis.Analyzer.failAnalysis(Analyzer.scala:44)
>       at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:154)
>       at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:49)
> {code}
> Failed Query with OrderBy
> {code}
>   def testParquetOrderBy() {
>     val ddl =
>       """CREATE TABLE IF NOT EXISTS test ( c1 string, c2 int ) STORED AS PARQUET"""
>     val failedOrderBy =
>       """ SELECT c1, avg ( c2 ) c_avg
>         | FROM test
>         | GROUP BY c1
>         | ORDER BY avg ( c2 )""".stripMargin
>     TestHive.sql(ddl)
>     TestHive.sql(failedOrderBy).collect
>   }
> org.apache.spark.sql.AnalysisException: resolved attribute(s) c2#33 missing from c1#34,c2#35 in operator !Aggregate [c1#34], [avg(cast(c2#33 as bigint)) AS aggOrder#31,c1#34,avg(cast(c2#35 as bigint)) AS c_avg#28];
>       at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.failAnalysis(CheckAnalysis.scala:37)
>       at org.apache.spark.sql.catalyst.analysis.Analyzer.failAnalysis(Analyzer.scala:44)
> {code}



