This should be part of Spark 1.4: https://issues.apache.org/jira/browse/SPARK-1442
I don't see it in the documentation though: https://spark.apache.org/docs/latest/sql-programming-guide.html

--
Ruslan Dautkhanov

On Mon, Jul 6, 2015 at 5:06 AM, gireeshp <gireesh.puthum...@augmentiq.in> wrote:
> Is there any equivalent of Oracle's *analytical functions* in Spark SQL?
>
> For example, if I have the following data set (say table T):
> /EID|DEPT
> 101|COMP
> 102|COMP
> 103|COMP
> 104|MARK/
>
> In Oracle, I can do something like
> /select EID, DEPT, count(1) over (partition by DEPT) CNT from T;/
>
> to get:
> /EID|DEPT|CNT
> 101|COMP|3
> 102|COMP|3
> 103|COMP|3
> 104|MARK|1/
>
> Can we do an equivalent query in Spark SQL? Or what is the best way to
> get such results with Spark DataFrames?
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-equivalent-for-Oracle-s-analytical-functions-tp23646.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
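For what it's worth, window functions did land in Spark 1.4 (SPARK-1442), so the Oracle query above should run essentially as written through `sqlContext.sql(...)` on a HiveContext. To make the semantics of `count(1) over (partition by DEPT)` concrete, here is a minimal plain-Python sketch of the same partition-count logic, using the sample rows from the question (plain Python only so it runs anywhere; it is an illustration of what the window function computes, not Spark code):

```python
from collections import Counter

# Rows of the example table T from the thread: (EID, DEPT)
rows = [(101, "COMP"), (102, "COMP"), (103, "COMP"), (104, "MARK")]

# count(1) over (partition by DEPT): count the rows in each DEPT
# partition, then attach that count to every row of the partition.
dept_counts = Counter(dept for _, dept in rows)
result = [(eid, dept, dept_counts[dept]) for eid, dept in rows]

for eid, dept, cnt in result:
    print(f"{eid}|{dept}|{cnt}")
# 101|COMP|3
# 102|COMP|3
# 103|COMP|3
# 104|MARK|1
```

In Spark 1.4+ the DataFrame API exposes the same thing via `pyspark.sql.Window`: roughly `df.select("EID", "DEPT", F.count("*").over(Window.partitionBy("DEPT")).alias("CNT"))`.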