, etc) to execute in a single stage.
Hope this helps,
-adrian
From: Tathagata Das
Date: Wednesday, October 21, 2015 at 10:36 AM
To: swetha
Cc: user
Subject: Re: Job splling to disk and memory in Spark Streaming
Well, reduceByKey needs to shuffle if your intermediate data is not already
partitioned.

> …it gets
> shuffled.
>
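A minimal sketch of what this can look like, assuming a `DStream[(String, Int)]` of key/count pairs (the names and the partition count are illustrative, not from the thread). The key point is that `partitionBy` and `reduceByKey` must use the same partitioner so the reduce step sees the data as already co-located:

```scala
import org.apache.spark.HashPartitioner
import org.apache.spark.streaming.dstream.DStream

// Hypothetical aggregation over a DStream of (key, count) pairs.
def aggregate(pairs: DStream[(String, Int)]): DStream[(String, Int)] = {
  val partitioner = new HashPartitioner(100)

  pairs.transform { rdd =>
    // Pre-partition the intermediate data once with an explicit partitioner...
    rdd.partitionBy(partitioner)
       // ...then reduceByKey with the SAME partitioner finds each key's
       // records already on one partition and does not shuffle again.
       .reduceByKey(partitioner, _ + _)
  }
}
```

The pre-partitioning itself still costs one shuffle; the win is that every later stage keyed with the same partitioner (further reduceByKey, join, etc.) can reuse that layout instead of reshuffling.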
>
> How to use custom partitioner in Spark Streaming for an intermediate stage
> so that the next stage that uses reduceByKey does not have to do shuffles?
>
> Thanks,
> Swetha
>
>
>
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Job-splling-to-disk-and-memory-in-Spark-Streaming-tp25149.html
Sent from the Apache Spark User List mailing list archive at Nabble.com