I definitely agree that dynamic allocation is useful; that's why I asked the question :p
More specifically, does Spark plan to solve the problems with DRA for Structured Streaming mentioned in that Cloudera article? If folks can give me pointers on where to start, I'd be happy to implement something similar to what Spark Streaming did.

________________________________
From: cbowden <cbcweb...@gmail.com>
Sent: Thursday, August 24, 2017 7:01 PM
To: user@spark.apache.org
Subject: Re: [Streaming][Structured Streaming] Understanding dynamic allocation in streaming jobs

You can leverage dynamic resource allocation with Structured Streaming. Certainly there's an argument that trivial jobs won't benefit, and that important jobs should have fixed resources for stable end-to-end latency. A few scenarios with benefits come to mind:

- I want my application to automatically leverage more resources if my environment changes, e.g. Kafka topic partitions were increased at runtime.
- I am not building a toy application, and my driver is managing many streaming queries with fair scheduling enabled, where not every streaming query has strict latency requirements.
- My source's underlying RDD representing the DataFrame provided by getBatch is volatile, e.g. the number of partitions varies batch to batch.

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Streaming-Structured-Streaming-Understanding-dynamic-allocation-in-streaming-jobs-tp29091p29104.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
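For context, core dynamic allocation (the mechanism being discussed) is turned on through standard Spark configuration; the Spark Streaming (DStream) variant mentioned above is a separate flag, `spark.streaming.dynamicAllocation.enabled`. A minimal sketch of enabling the core mechanism, assuming a cluster manager with the external shuffle service available (executor counts here are illustrative, not recommendations):

```shell
# Enable core dynamic resource allocation for a Spark application.
# Requires the external shuffle service so executors can be released
# without losing shuffle data.
spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=20 \
  --conf spark.dynamicAllocation.executorIdleTimeout=60s \
  my-streaming-app.jar
```

As the thread notes, whether this helps a Structured Streaming job depends on the workload: the executor-idle heuristic was designed for batch jobs, which is the gap the Cloudera article and the question above are pointing at.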