Is there any equivalent of Oracle's *analytical functions* in Spark SQL?
For example, if I have the following data set (say, table T):
/EID|DEPT
101|COMP
102|COMP
103|COMP
104|MARK/
In Oracle, I can do something like
/select EID, DEPT, count(1) over (partition by DEPT) CNT from T;/
to get:
/EID|DEPT|CNT
101|COMP|3
102|COMP|3
103|COMP|3
104|MARK|1/