.map { case (key, value) =>
  (key, value.toArray.toSeq.sliding(2, 1).map(x => x.sum / x.size)) }
  .foreach(println)
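For anyone reading this in the archive, here is a minimal, self-contained sketch of the keyed moving-average idea in the snippet above. The input format ("key,amount" lines such as "987,75.0"), the window width of 2, and every name in it are assumptions made for illustration; the thread never shows the full pipeline.

// Hedged sketch of a per-key moving average with plain RDDs.
// ASSUMPTIONS: input lines look like "key,amount" (e.g. "987,75.0") and the
// window width is 2; none of this is taken verbatim from the thread.
import org.apache.spark.{SparkConf, SparkContext}

object KeyedMovingAverage {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("keyed-moving-average").setMaster("local[*]"))

    // Sample data standing in for the real input file.
    val lines = sc.parallelize(Seq("987,75.0", "987,-25.0", "987,50.0", "987,-100.0"))

    lines
      .map { line =>
        val Array(key, amount) = line.split(",")
        (key, amount.toDouble)
      }
      .groupByKey()                       // gather each key's values together
      .map { case (key, values) =>
        // sliding(2, 1) yields overlapping windows of width 2; average each
        // window; toList so println shows the numbers rather than an iterator
        (key, values.toSeq.sliding(2, 1).map(w => w.sum / w.size).toList)
      }
      .foreach(println)

    sc.stop()
  }
}

Note that groupByKey makes no ordering guarantee for a key's values, so a real job would have to carry the date along and sort within each group before calling sliding.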
On Sun, Jul 31, 2016 at 12:03 AM, sri <kali.tumm...@gmail.com> wrote:
> Hi All,
>
> I managed to write it using the sliding function, but can I get the key as
> well in my output?
>
> ...oDouble)).toArray().sliding(2,1).map(x =>
> (x,x.size)).foreach(println)
>
> At the moment my output is:
>
> 75.0
> -25.0
> 50.0
> -100.0
>
> I want it with the key. How do I get the moving-average output based on key?
>
> 987,75.0
> 987,-25
> 987,50.0
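As a side note on the arithmetic above: those numbers come from sliding(2, 1) producing overlapping windows of two values and averaging each one. The tiny plain-Scala sketch below only illustrates that mechanism; the input values in it are invented and are not the thread's data.

// Illustration only: width-2 moving average with sliding, no Spark involved.
// The balances below are made-up sample values.
object SlidingAverageDemo {
  def main(args: Array[String]): Unit = {
    val balances = Seq(10.0, 20.0, 30.0, 40.0)

    // sliding(2, 1) -> windows (10,20), (20,30), (30,40)
    val movingAverages = balances.sliding(2, 1).map(w => w.sum / w.size).toList

    movingAverages.foreach(println)   // prints 15.0, 25.0, 35.0
  }
}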
> On Sat, Jul 30, 2016 at 11:40 AM, sri hari kali charan Tummala
> <kali.tumm...@gmail.com> wrote:
>> for know
>>
>> Sri
>>
>> On Sat, Jul 30, 2016 at 11:24 AM, Jacek Laskowski <ja...@japila.pl> wrote:
>>> Why?
>>>
>>> Pozdrawiam,
>>> Jacek Laskowski
>>>
>>> https://medium.com/@jaceklaskowski/
>>> Mastering Apache Spark 2.0 http://bit.ly/mastering-apache-spark
>>> Follow me at https://twitter.com/jaceklaskowski
>>>
>>> On Sat, Jul 30, 2016 at 4:42 AM, kali.tumm...@gmail.com
>>> <kali.tumm...@gmail.com> wrote:
>>>> Hi All,
>>>>
>>>> I am still learning Scala. How would the SQL below be written using a
>>>> Spark RDD rather than Spark data frames?
>>>>
>>>> SELECT DATE, balance,
>>>>        SUM(balance) OVER (ORDER BY DATE ROWS BETWEEN UNBOUNDED PRECEDING
>>>>                           AND CURRENT ROW) daily_balance
>>>> FROM table
>>>>
>>>> --
>>>> View this message in context:
>>>> http://apache-spark-user-list.1001560.n3.nabble.com/sql-to-spark-scala-rdd-tp27433.html
>>>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
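For completeness, one way to express that window query's running total with plain RDDs (no DataFrames) is a two-pass cumulative sum over a date-sorted RDD: sum each partition, turn those sums into per-partition offsets, then accumulate inside each partition. The sketch below is only an illustration under assumed types and names ((date, balance) pairs with ISO-formatted String dates); it is not code from this thread.

// Sketch: SUM(balance) OVER (ORDER BY DATE ROWS BETWEEN UNBOUNDED PRECEDING
// AND CURRENT ROW) with plain RDDs. Record layout and names are assumptions.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

object RunningBalance {
  def runningTotals(rows: RDD[(String, Double)]): RDD[(String, Double, Double)] = {
    // 1. Order by date (ISO-formatted dates sort correctly as strings).
    val sorted = rows.sortBy(_._1)

    // 2. Sum each partition so later partitions know their starting offset.
    val partitionSums = sorted
      .mapPartitionsWithIndex { (idx, it) => Iterator((idx, it.map(_._2).sum)) }
      .collect()
      .sortBy(_._1)
      .map(_._2)

    // Offset for partition i = sum of everything in partitions 0 .. i-1.
    val offsets = partitionSums.scanLeft(0.0)(_ + _)

    // 3. Running sum within each partition, shifted by that partition's offset.
    sorted.mapPartitionsWithIndex { (idx, it) =>
      var acc = offsets(idx)
      it.map { case (date, balance) =>
        acc += balance
        (date, balance, acc)          // DATE, balance, daily_balance
      }
    }
  }

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("running-balance").setMaster("local[2]"))
    val rows = sc.parallelize(
      Seq(("2016-07-01", 100.0), ("2016-07-02", -25.0), ("2016-07-03", 50.0)))
    runningTotals(rows).collect().foreach(println)
    sc.stop()
  }
}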