You would probably need dynamic allocation, which is only available on YARN
and Mesos, or wait for the ongoing Spark-on-Kubernetes integration work.
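
If you do go the YARN route, a minimal sketch of enabling dynamic
allocation at submit time would look something like the following (the
main class and jar are placeholders, and the executor bounds are just
example values; the external shuffle service also has to be running on
each node for this to work):

spark-submit \
  --master yarn \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=10 \
  --class com.example.StreamingApp \
  streaming-app.jar

Note that plain dynamic allocation was designed with batch jobs in
mind; if I remember correctly, streaming-aware dynamic allocation
(spark.streaming.dynamicAllocation.enabled) only arrived in Spark 2.0,
so it would not help on 1.6.3.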


On 15 March 2017 at 1:54 AM, "Pranav Shukla" <pranav.shu...@brevitaz.com>
wrote:

> How can we scale, or possibly auto-scale, a Spark Streaming application
> consuming from Kafka using Kafka direct streams? We are using Spark
> 1.6.3 and cannot move to 2.x unless there is a strong reason.
>
> Scenario:
> Kafka topic with 10 partitions
> Standalone cluster running on Kubernetes with 1 master and 2 workers
>
> What we would like to know:
> Can we increase the number of partitions (say from 10 to 15), add an
> additional worker node without restarting the streaming application,
> and start consuming from the additional partitions?
>
> Is this possible? That is, can we start additional workers in the
> standalone cluster to auto-scale an existing Spark Streaming application
> that is already running, or do we have to stop and resubmit the app?
>
> Best Regards,
> Pranav Shukla
>
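
Regarding the direct-stream setup described above: on 1.6.3 the
relevant piece is roughly the following (broker addresses, topic name
and batch interval are placeholders). If I remember correctly, the 0.8
direct API resolves the topic/partition set once when the stream is
created, so partitions added to the topic later are not consumed until
the application is restarted; extra workers can take over tasks for
existing partitions, but cannot pick up new ones:

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object DirectStreamSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("direct-stream-sketch")
    val ssc = new StreamingContext(conf, Seconds(10)) // placeholder batch interval

    // Placeholder broker list and topic name.
    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092,broker2:9092")
    val topics = Set("events")

    // The topic/partition set is resolved here, once; the 0.8 direct API
    // does not detect partitions added to the topic afterwards.
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)

    stream.foreachRDD { rdd =>
      // One Spark partition per Kafka partition; additional workers can
      // run these tasks, but the partition count is fixed per batch.
      println(s"batch with ${rdd.getNumPartitions} partitions")
    }

    ssc.start()
    ssc.awaitTermination()
  }
}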
