Hi there,

I would like to check with you whether Spark SQL has equivalents of
Oracle's analytic functions.

For example, if I have the following data set (table T):
*EID|DEPT*
*101|COMP*
*102|COMP*
*103|COMP*
*104|MARK*

In Oracle, I can do something like
*select EID, DEPT, count(1) over (partition by DEPT) CNT from T;*

to get:
*EID|DEPT|CNT*
*101|COMP|3*
*102|COMP|3*
*103|COMP|3*
*104|MARK|1*

Can we run an equivalent query in Spark SQL? Or what is the best way to
get such results with Spark DataFrames?

Thank you,
Gireesh
