[ https://issues.apache.org/jira/browse/SPARK-18580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16109265#comment-16109265 ]

Oz Ben-Ami commented on SPARK-18580:
------------------------------------

+1
We're using our own dynamic allocation to scale with incoming traffic and 
consumer lag, so we really don't want to cap maxRatePerPartition, but we 
also don't want the first batch to be unbounded before scaling has kicked in. 
Anything I can do to help this along? Thanks! [~omuravskiy] [~zsxwing]

> Use spark.streaming.backpressure.initialRate in DirectKafkaInputDStream
> -----------------------------------------------------------------------
>
>                 Key: SPARK-18580
>                 URL: https://issues.apache.org/jira/browse/SPARK-18580
>             Project: Spark
>          Issue Type: Improvement
>          Components: DStreams
>    Affects Versions: 2.0.2
>            Reporter: Oleg Muravskiy
>
> Currently, `spark.streaming.kafka.maxRatePerPartition` is used as the 
> initial rate when backpressure is enabled. This is too aggressive for the 
> application while it is still warming up.
> This is similar to SPARK-11627, applying the solution provided there to 
> DirectKafkaInputDStream.
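
To make the request concrete, here is a sketch of the configuration involved. The property values are illustrative, not recommendations; the point is that `spark.streaming.backpressure.initialRate` (added for receiver-based streams in SPARK-11627) is currently ignored by DirectKafkaInputDStream, which falls back to `spark.streaming.kafka.maxRatePerPartition` for the first batch:

```
# spark-defaults.conf (values are examples only)

# Enable PID-based backpressure for the streaming job
spark.streaming.backpressure.enabled        true

# Desired: cap only the FIRST batch while the rate estimator warms up.
# Honored by receiver-based streams since SPARK-11627; this issue asks
# for DirectKafkaInputDStream to honor it as well.
spark.streaming.backpressure.initialRate    1000

# Current workaround: a static per-partition cap, which has the downside
# of also limiting throughput in steady state.
spark.streaming.kafka.maxRatePerPartition   10000
```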



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
