Why change the number of partitions of RDDs, especially since you
generally can't do that without a shuffle? If you just mean to ramp
resource usage up and down, dynamic allocation (of executors) already
does that.
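
To make the distinction concrete, here is a minimal sketch of the two
options, in Scala against the RDD API of that era (the app name,
dataset, and partition counts are just placeholders):

    import org.apache.spark.{SparkConf, SparkContext}

    // Dynamic allocation scales the number of executors with load; it
    // requires the external shuffle service so executors can be
    // released without losing their shuffle files.
    val conf = new SparkConf()
      .setAppName("partition-example")
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.shuffle.service.enabled", "true")
      .set("spark.dynamicAllocation.minExecutors", "1")
      .set("spark.dynamicAllocation.maxExecutors", "20")
    val sc = new SparkContext(conf)

    val rdd = sc.parallelize(1 to 1000000, 8)

    // repartition() can raise or lower the partition count, but it
    // always performs a full shuffle of the data.
    val wider = rdd.repartition(32)

    // coalesce() can only lower the count, but avoids a shuffle by
    // merging partitions that are already colocated.
    val narrower = rdd.coalesce(4)

So increasing parallelism after the fact always pays the shuffle cost,
whereas dynamic allocation adjusts resources without touching the data
layout at all.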
On Wed, Sep 30, 2015 at 10:49 PM, Muhammed Uluyol wrote:
Hello,
How feasible would it be to have Spark speculatively increase the number of
partitions when there is spare capacity in the system? We want to do this
to decrease application runtime. Initially, we will assume that
function calls of the same type will have the same runtime (e.g.