Hi

When I use a JavaPairRDD's groupByKey(), reduceByKey(), or sortByKey(), is 
there a way for me to specify the number of reduce tasks, as there is in a 
Scala RDD? Or do I have to set them all to use spark.default.parallelism?
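For concreteness, what I'm hoping for is an overload that takes an explicit partition count, along these lines (a sketch only; the extra int argument on reduceByKey() is the API I'm asking about, not something I've confirmed exists on JavaPairRDD):

```java
import java.util.Arrays;

import scala.Tuple2;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class NumReduceTasksExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setMaster("local[2]")
                .setAppName("num-reduce-tasks-example");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
                new Tuple2<>("a", 1),
                new Tuple2<>("b", 2),
                new Tuple2<>("a", 3)));

        // The call I'm looking for: pass the number of reduce tasks directly,
        // instead of falling back to spark.default.parallelism.
        JavaPairRDD<String, Integer> sums = pairs.reduceByKey((x, y) -> x + y, 8);

        System.out.println(sums.collect());
        sc.stop();
    }
}
```

The same question applies to groupByKey() and sortByKey(): I'd want an analogous argument controlling the number of partitions for each of them.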

Thanks,

-Matt Cheah

(feels like I've been asking a lot of questions as of late…)
