If people agree this is desirable, I am willing to submit a SIP for it and
find time to work on it.

Thanks,
Aniket

On Sun, Nov 6, 2016 at 1:06 PM Aniket Bhatnagar <aniket.bhatna...@gmail.com>
wrote:

> Hello
>
> The dynamic allocation feature lets you add executors and scale compute
> capacity. This is great; however, I feel we also need a way to
> dynamically scale storage. Currently, if the disks cannot hold the
> spilled/shuffle data, the job is aborted, causing frustration and lost
> time. In deployments like AWS EMR, it is possible to run an agent that
> adds disks on the fly when it sees they are running out of space, and it
> would be great if Spark could immediately start using the added disks,
> just as it does when new executors are added.
>
> Thanks,
> Aniket
>
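The rescan the quoted message asks for could be as simple as periodically diffing the mount points under the configured local-dir roots. This is only a sketch of the idea, not Spark's actual local-directory handling; the function name and the assumption that an external agent (e.g. on EMR) mounts new volumes under a known root such as /mnt are both hypothetical:

```python
import os

def discover_new_local_dirs(configured_roots, known_dirs):
    """Return directories under the configured roots that are not yet in use.

    Hypothetical sketch: an external agent mounts new volumes under one of
    the configured roots; a periodic rescan like this would let the
    spill/shuffle writer start spreading files onto the new disks.
    """
    available = []
    for root in configured_roots:
        if not os.path.isdir(root):
            continue  # root may not exist on every node
        for entry in sorted(os.listdir(root)):
            path = os.path.join(root, entry)
            if os.path.isdir(path):
                available.append(path)
    # Only report directories the storage layer has not already registered.
    return [d for d in available if d not in known_dirs]
```

In practice Spark reads its local directories once at executor startup, so the interesting work in a SIP would be wiring a rescan like this into the block manager rather than the detection itself.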
