> eCombiners)
>   .map { case (name, mapOfYearsToValues) =>
>     (Seq(name) ++ sequenceOfYears.map(year => mapOfYearsToValues.getOrElse(year, " "))).mkString(",")
>   } // here we assume that the sequence of all years is small enough to fit in memory. If
> you had to compute for each day, it may break and you would definitely need to use a specialized timeseries library…
From: Deng Ching-Mallete
Date: Friday, October 30, 2015 at 4:35 AM
To: Ascot Moss
Cc: User
Subject: Re: Pivot Data in Spark and Scala
Hi,
You could transform it into a pair RDD then use the combineByKey function.
HTH,
Deng
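To make the combineByKey suggestion concrete: it takes three functions (build a combiner from a key's first value, fold further values into it, and merge partial combiners from different partitions). Below is a minimal plain-Scala sketch of those three functions, with the per-key fold simulated locally; all names here are mine, not from the thread:

```scala
// (name, (year, value)) pairs, as they would sit in a pair RDD
val pairs = Seq(
  ("A", (2015, 4)), ("A", (2014, 12)), ("A", (2013, 1)),
  ("B", (2015, 24)), ("B", (2013, 4)))

// The three functions combineByKey expects
val createCombiner: ((Int, Int)) => Map[Int, Int] = { case (year, v) => Map(year -> v) }
val mergeValue: (Map[Int, Int], (Int, Int)) => Map[Int, Int] =
  (acc, yv) => acc + (yv._1 -> yv._2)
val mergeCombiners: (Map[Int, Int], Map[Int, Int]) => Map[Int, Int] = _ ++ _

// Local stand-in for pairs.combineByKey(createCombiner, mergeValue, mergeCombiners):
// each key ends up with a Map of year -> value
val combined: Map[String, Map[Int, Int]] = pairs
  .groupBy(_._1)
  .map { case (name, group) =>
    val values = group.map(_._2)
    name -> values.tail.foldLeft(createCombiner(values.head))(mergeValue)
  }
```

On an actual pair RDD keyed by name this would be `rdd.combineByKey(createCombiner, mergeValue, mergeCombiners)`; the `groupBy`/`foldLeft` above only mimics the per-key aggregation, and `mergeCombiners` would additionally reconcile partial maps produced on separate partitions.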
On Thu, Oct 29, 2015 at 7:29 PM, Ascot Moss wrote:
If you had to compute for each day, it may break and you
would definitely need to use a specialized timeseries library…
result.foreach(println)
sc.stop()
Best regards,
Fanilo
From: Adrian Tanase [mailto:atan...@adobe.com]
Sent: Friday, October 30, 2015 11:50 AM
To: Deng Ching-Mallete; Ascot Moss
Cc: User
Subject: Re: Pivot Data in Spark and Scala
memory. If you had to compute for each day, it may break
> and you would definitely need to use a specialized timeseries library…
>
> result.foreach(println)
>
> sc.stop()
>
> Best regards,
> Fanilo
>
> From: Adrian Tanase [mailto:atan...@adobe.com]
> Sent: Friday, October 30, 2015 11:50 AM
https://issues.apache.org/jira/browse/SPARK-8992
Should be in 1.6?
--
Ruslan Dautkhanov
On Thu, Oct 29, 2015 at 5:29 AM, Ascot Moss wrote:
> Hi,
>
> I have data as follows:
>
> A, 2015, 4
> A, 2014, 12
> A, 2013, 1
> B, 2015, 24
> B, 2013, 4
>
>
> I need to convert the
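For context, SPARK-8992 (linked above) adds a `pivot` operator on grouped DataFrames, which shipped in Spark 1.6. Assuming the rows are loaded into a DataFrame with columns `name`, `year`, and `value` (the column names and the existing `sc` SparkContext are my assumptions), a sketch against the 1.6-era API:

```scala
import org.apache.spark.sql.SQLContext

// Assumes an existing SparkContext named sc; column names are illustrative
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

val df = sc.parallelize(Seq(
  ("A", 2015, 4), ("A", 2014, 12), ("A", 2013, 1),
  ("B", 2015, 24), ("B", 2013, 4)))
  .toDF("name", "year", "value")

// One row per name, one column per listed year; missing cells come out as null
val pivoted = df.groupBy("name").pivot("year", Seq(2015, 2014, 2013)).sum("value")
pivoted.show()
```

Passing the explicit `Seq` of pivot values avoids an extra pass over the data to discover the distinct years.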
Hi,
I have data as follows:
A, 2015, 4
A, 2014, 12
A, 2013, 1
B, 2015, 24
B, 2013, 4
I need to convert the data to a new format:
A,4,12,1
B,24,,4
Any idea how to do this in Spark/Scala?
Thanks
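The requested reshape can be sketched with plain Scala collections (names like `rows` and `years` are mine); a Spark version would do the same grouping on an RDD or DataFrame:

```scala
// Input rows: (name, year, value)
val rows = Seq(
  ("A", 2015, 4), ("A", 2014, 12), ("A", 2013, 1),
  ("B", 2015, 24), ("B", 2013, 4))

// The columns of the pivoted output, in the order they should appear
val years = Seq(2015, 2014, 2013)

// Group by name, index each group by year, then emit one CSV line per name,
// leaving a cell empty when that (name, year) combination is missing
val pivoted = rows
  .groupBy(_._1)
  .map { case (name, group) =>
    val byYear = group.map { case (_, year, value) => year -> value }.toMap
    (name +: years.map(y => byYear.get(y).map(_.toString).getOrElse(""))).mkString(",")
  }
  .toSeq
  .sorted

pivoted.foreach(println)
// A,4,12,1
// B,24,,4
```

The empty string for missing cells reproduces the `B,24,,4` shape asked for; the fixed `years` sequence is what makes every output row the same width.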