[ 
https://issues.apache.org/jira/browse/SPARK-9435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14992633#comment-14992633
 ] 

Utkarsh Sengar edited comment on SPARK-9435 at 11/5/15 11:17 PM:
-----------------------------------------------------------------

I am running into this problem too. 

Query:
{code}
sqlContext.sql("SELECT properties.time, count(listToSingleId(properties.ids)) FROM allEvents WHERE event='abc' AND isSingleSearch(properties.ids) GROUP BY listToSingleId(properties.ids)")
{code}

UDF (Spark SQL passes array columns to Scala UDFs as Seq, so the parameter is typed Seq[Long] rather than List[Long]):
{code}
sqlContext.udf.register("listToSingleId", (ids: Seq[Long]) => ids.head)
{code}

I'm trying the nested select now, but is this a bug or the expected outcome when using a UDF with GROUP BY?
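For the record, the nested-select rewrite I'm trying moves the UDF call into a subquery so that the outer GROUP BY references a plain column rather than a UDF expression (the singleId alias and exact shape are just illustrative):

{code}
SELECT singleId, count(*)
FROM (
  SELECT listToSingleId(properties.ids) AS singleId
  FROM allEvents
  WHERE event = 'abc' AND isSingleSearch(properties.ids)
) t
GROUP BY singleId
{code}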




was (Author: zengr):
I am running into this problem too. 

Query:
{code}
sqlContext.sql("SELECT properties.time FROM allEvents WHERE event='abc' AND isSingleSearch(properties.ids) GROUP BY listToSingleId(properties.ids)")
{code}

UDF:
{code}
sqlContext.udf.register("listToSingleId", (ids: Seq[Long]) => ids.head)
{code}

I'm trying the nested select now, but is this a bug or the expected outcome when using a UDF with GROUP BY?



> Java UDFs don't work with GROUP BY expressions
> ----------------------------------------------
>
>                 Key: SPARK-9435
>                 URL: https://issues.apache.org/jira/browse/SPARK-9435
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.4.1
>         Environment: All
>            Reporter: James Aley
>         Attachments: IncMain.java, points.txt
>
>
> If you define a UDF in Java, for example by implementing the UDF1 interface, 
> then try to use that UDF on a column in both the SELECT and GROUP BY clauses 
> of a query, you'll get an error like this:
> {code}
> "SELECT inc(y),COUNT(DISTINCT x) FROM test_table GROUP BY inc(y)"
> org.apache.spark.sql.AnalysisException: expression 'y' is neither present in 
> the group by, nor is it an aggregate function. Add to group by or wrap in 
> first() if you don't care which value you get.
> {code}
> We put together a minimal reproduction in the attached Java file, which uses 
> the data in the attached text file.
> I'm guessing there's some kind of issue with the expression equality 
> implementation, so Spark can't tell that those two expressions are the same. 
> If you do the same thing from Scala, it works fine.
> Note for context: we ran into this issue while working around SPARK-9338.
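The equality guess in the description can be illustrated outside Spark: an analyzer matching SELECT expressions against GROUP BY expressions needs structural equality, and if the node wrapping a Java UDF falls back to Object identity, two textually identical calls like inc(y) never compare equal. A minimal sketch of that idea — the class names here are hypothetical stand-ins, not Spark internals:

```java
import java.util.Objects;
import java.util.function.Function;

// Hypothetical stand-in for an analyzer's expression node wrapping a UDF call.
final class UdfCall {
    final Function<Integer, Integer> fn; // the wrapped UDF instance
    final String column;                 // the column it is applied to

    UdfCall(Function<Integer, Integer> fn, String column) {
        this.fn = fn;
        this.column = column;
    }

    // Structural equality: two calls match when they wrap the same UDF
    // instance and reference the same column. Without this override,
    // Object identity is used and the GROUP BY match below would fail.
    @Override
    public boolean equals(Object o) {
        if (!(o instanceof UdfCall)) return false;
        UdfCall other = (UdfCall) o;
        return fn == other.fn && column.equals(other.column);
    }

    @Override
    public int hashCode() {
        return Objects.hash(System.identityHashCode(fn), column);
    }
}

public class Main {
    public static void main(String[] args) {
        Function<Integer, Integer> inc = y -> y + 1;

        UdfCall selectExpr = new UdfCall(inc, "y");  // inc(y) in SELECT
        UdfCall groupByExpr = new UdfCall(inc, "y"); // inc(y) in GROUP BY

        // The analyzer's question: is the SELECT expression covered by GROUP BY?
        System.out.println(selectExpr.equals(groupByExpr)); // prints true
    }
}
```

If equals()/hashCode() were left at their Object defaults, the same comparison would print false, which is consistent with the analyzer reporting that 'y' is "neither present in the group by, nor an aggregate function".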



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
