Hi Brant,
Let me partially answer your concerns: please follow a new open source
project PL/HQL (www.plhql.org), aimed at letting you reuse existing
logic and leverage existing skills to some extent, so you do not need to
rewrite everything in Scala/Java and can migrate gradually. I hope it
helps; feedback is welcome.
Dmitry
On Fri, Feb 13, 2015 at 11:54 AM, Dmitry Tolpeko dmtolp...@gmail.com
wrote:
Hello,
To convert existing Map Reduce jobs to Spark, I need to implement window
functions such as FIRST_VALUE, LEAD, LAG and so on. For example,
FIRST_VALUE function:
Source (1st column is key):
A, A1
A, A2
A, A3
B, B1
B, B2
C, C1
and the result should be
A, A1, A1
A, A2, A1
A, A3, A1
B, B1,
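The per-key logic above can be sketched in plain Scala collections; the same transformation would typically be expressed in Spark with groupByKey followed by flatMap, but this snippet itself does not depend on Spark, and the object and method names here are illustrative, not from any library:

```scala
// Sketch of FIRST_VALUE over the sample data from the message above.
// For each key, every row is paired with the first value seen for
// that key (rows within a group keep their original order).
object FirstValueSketch {
  def firstValue(rows: Seq[(String, String)]): Seq[(String, String, String)] =
    rows.groupBy(_._1).toSeq.sortBy(_._1).flatMap { case (key, group) =>
      val first = group.head._2 // first value within this key's group
      group.map { case (k, v) => (k, v, first) }
    }

  def main(args: Array[String]): Unit = {
    val source = Seq(
      ("A", "A1"), ("A", "A2"), ("A", "A3"),
      ("B", "B1"), ("B", "B2"),
      ("C", "C1"))
    firstValue(source).foreach { case (k, v, f) => println(s"$k, $v, $f") }
  }
}
```

Note that Spark's groupByKey does not guarantee ordering within a group, so a real implementation would need to carry and sort on an explicit ordering column before taking the first value.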