You are using the wrong RDD. repartition() is a transformation that returns a new RDD rather than modifying the one it is called on, so use the returned RDD as follows:

    val repartitionedRDD = results.repartition(20)
    println(repartitionedRDD.partitions.size)
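
For a bit more context, here is a minimal, self-contained sketch of the whole flow. The local master, the sample input data and the RepartitionCheck object name are only placeholders I'm assuming for illustration:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    object RepartitionCheck {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("repartition-check").setMaster("local[*]"))

        // Stand-in for your `input` pair RDD
        val input = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3), ("c", 4)))

        // reduceByKey with an explicit partition count of 10
        val results = input
          .reduceByKey((x, y) => x + y, 10)
          .persist(StorageLevel.DISK_ONLY)
        println(results.partitions.size)          // 10

        // repartition is a transformation: it returns a new RDD and
        // leaves `results` unchanged, so capture the return value
        val repartitionedRDD = results.repartition(20)
        println(repartitionedRDD.partitions.size) // 20

        sc.stop()
      }
    }

Also note that if you want the 20-partition data cached, persist repartitionedRDD rather than results; repartition always performs a full shuffle.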

On Sat, Jan 2, 2016 at 10:38 AM, jimitkr <ji...@softpath.net> wrote:

> Hi,
>
> I'm trying to test some custom parallelism and repartitioning in Spark.
>
> First, I reduce my RDD with reduceByKey, forcing it to create 10 partitions.
>
> I then repartition the data to 20 partitions and print the number of
> partitions, but I always get 10. It looks like the repartition call is
> being ignored.
>
> How do I get repartitioning to work? See the code below:
>
>     val results = input.reduceByKey((x, y) => x + y, 10).persist(StorageLevel.DISK_ONLY)
>     results.repartition(20)
>     println(results.partitions.size)
>
>
>


-- 
Best Regards

Jeff Zhang
