Thanks TD, this is what I was looking for. rdd.context.makeRDD worked.
Laeeq
On Friday, March 13, 2015 11:08 PM, Tathagata Das wrote:
Is the number of top K elements you want to keep small? That is, is K
small? In which case, you can
1. Either do it in the driver on the array:
dstream.foreachRDD { rdd =>
  val topK = rdd.top(K)
  // use the top K elements here, on the driver
}
2. Or, you can use the topK to create another RDD using sc.makeRDD
dstream.transform(rdd => { ... })
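A minimal sketch of that second option, assuming dstream is a DStream[Double] and K is a small Int (the names come from this thread, not from a fixed API):

val topKStream = dstream.transform { rdd =>
  // top(K) is an action: the K largest elements come back to the driver as an Array
  val topK = rdd.top(K)
  // wrap the small array into a one-partition RDD so the result stays a DStream
  rdd.context.makeRDD(topK, 1)
}
topKStream.print()   // e.g. print each window's top K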
Hi,
Earlier my code was like the following, but it was slow due to the repartition. I want the top K of
each window in a stream.

val counts = keyAndValues.map(x => math.round(x._3.toDouble))
  .countByValueAndWindow(Seconds(4), Seconds(4))
val topCounts = counts.repartition(1).map(_.swap)
  .transform(rdd => rdd.sortByKey(false))
Hm, aren't you able to use the SparkContext here? DStream operations
happen on the driver. So you can parallelize() the result?
take() won't work as it's not the same as top()
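A sketch of that suggestion, assuming dstream is a DStream[Double] and K is a small Int; topKRDD is only an illustrative name:

dstream.foreachRDD { rdd =>
  // foreachRDD runs this block on the driver, so top() returns a local Array here
  val topK = rdd.top(K)
  // the driver has the SparkContext, so the small array can be turned back into an RDD if needed
  val topKRDD = rdd.sparkContext.parallelize(topK, 1)
  // use topK directly, or topKRDD with any further RDD method
}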
On Fri, Mar 13, 2015 at 11:23 AM, Akhil Das wrote:
> Like this?
>
> dstream.repartition(1).mapPartitions(it => it.take(5))
Hi,
repartition is expensive. I am looking for an efficient way to do this.
Regards,
Laeeq
On Friday, March 13, 2015 12:24 PM, Akhil Das wrote:
Like this?
dstream.repartition(1).mapPartitions(it => it.take(5))
Thanks
Best Regards
On Fri, Mar 13, 2015 at 4:11 PM, Laeeq Ahmed wrote:
Hi,
I normally use dstream.transform whenever I need to use methods which are
available in the RDD API but not in the streaming API, e.g. dstream.transform(x =>
x.sortByKey(true)).
But there are other RDD methods which return types other than RDD, e.g.
dstream.transform(x => x.top(5)). top here returns an Array rather than an RDD, so it does not fit in transform. I want the top K of each window in the stream.
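For reference, a self-contained sketch of the transform pattern described above, assuming a socket stream of numeric lines; the host, port, window sizes and object name are illustrative, not from the original post:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object TransformExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("TransformExample").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(1))

    // one numeric value per line, e.g. fed with netcat on port 9999
    val values = ssc.socketTextStream("localhost", 9999).map(_.trim.toDouble)

    // count occurrences of each rounded value over every 4-second window
    val counts = values.map(v => (math.round(v), 1L))
      .reduceByKeyAndWindow(_ + _, Seconds(4), Seconds(4))

    // transform exposes the full RDD API per batch; sortByKey is not on DStream itself
    val sorted = counts.map(_.swap).transform(rdd => rdd.sortByKey(ascending = false))
    sorted.print()

    ssc.start()
    ssc.awaitTermination()
  }
}

The per-window top K itself would then come from rdd.top(K) inside transform, as in the reply further up the thread.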