I have watched a few videos from Databricks/Andrew Or around the Spark 1.2
release, and it seemed that dynamic allocation was not yet available for
Spark Streaming.

I now see SPARK-10955 <https://issues.apache.org/jira/browse/SPARK-10955>, which
is tied to 1.5.2 and concerns disabling dynamic allocation when it is used
with Spark Streaming.

I use Spark Streaming with a receiverless/direct Kafka connection.  When I
start an app reading from the beginning of the topic, I would like to have
more resources than once it has caught up.  Is it possible to use dynamic
allocation for this use case?  A rough sketch of the setup I mean is below.
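
For concreteness, this is roughly what I have in mind (the broker address,
topic name, executor counts, and batch interval are just placeholders, not
my real configuration):

  import kafka.serializer.StringDecoder
  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}
  import org.apache.spark.streaming.kafka.KafkaUtils

  val conf = new SparkConf()
    .setAppName("KafkaCatchUp")
    // Dynamic allocation settings -- values here are illustrative only.
    .set("spark.dynamicAllocation.enabled", "true")
    .set("spark.shuffle.service.enabled", "true")
    .set("spark.dynamicAllocation.minExecutors", "2")
    .set("spark.dynamicAllocation.maxExecutors", "20")

  val ssc = new StreamingContext(conf, Seconds(10))

  val kafkaParams = Map(
    "metadata.broker.list" -> "broker1:9092",  // placeholder broker
    "auto.offset.reset" -> "smallest"          // start from the beginning of the topic
  )

  // Receiverless/direct stream against the 0.8 Kafka API.
  val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
    ssc, kafkaParams, Set("my-topic"))         // placeholder topic name

  stream.map(_._2).print()

  ssc.start()
  ssc.awaitTermination()

The hope is that more executors would be granted during the initial
catch-up and released once the stream is keeping pace with the topic.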

thanks,
Robert
