[ https://issues.apache.org/jira/browse/SPARK-18620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16470069#comment-16470069 ]
bruce_zhao commented on SPARK-18620:
------------------------------------

This PR flattens the input rate, but the number of records fetched per GetRecords call is still uncontrolled. In onStart(), the receiver initializes a configuration for the KCL worker:

    val baseClientLibConfiguration = new KinesisClientLibConfiguration(
      checkpointAppName,
      streamName,
      kinesisProvider,
      dynamoDBCreds.map(_.provider).getOrElse(kinesisProvider),
      cloudWatchCreds.map(_.provider).getOrElse(kinesisProvider),
      workerId)

The KCL then falls back to its default DEFAULT_MAX_RECORDS (10,000) for getRecords. Since Kinesis only supports a read rate of 2 MB per second per shard, it is easy to hit a ProvisionedThroughputExceededException, especially when the application is restarted after a long stop. A sketch of one way to cap the KCL fetch size follows after the quoted issue below.

> Spark Streaming + Kinesis : Receiver MaxRate is violated
> --------------------------------------------------------
>
>                 Key: SPARK-18620
>                 URL: https://issues.apache.org/jira/browse/SPARK-18620
>             Project: Spark
>          Issue Type: Bug
>          Components: DStreams
>    Affects Versions: 2.0.2
>            Reporter: david przybill
>            Assignee: Takeshi Yamamuro
>            Priority: Minor
>              Labels: kinesis
>             Fix For: 2.2.0
>
>         Attachments: Apply_limit in_spark_with_my_patch.png, Apply_limit in_vanilla_spark.png, Apply_no_limit.png
>
>
> I am calling spark-submit passing maxRate; I have a single Kinesis receiver and batches of 1s:
>
>     spark-submit --conf spark.streaming.receiver.maxRate=10 ....
>
> However, a single batch can greatly exceed the established maxRate, e.g. I am getting 300 records.
> It looks like Kinesis is completely ignoring the spark.streaming.receiver.maxRate configuration.
> If you look inside KinesisReceiver.onStart, you see:
>
>     val kinesisClientLibConfiguration =
>       new KinesisClientLibConfiguration(checkpointAppName, streamName, awsCredProvider, workerId)
>         .withKinesisEndpoint(endpointUrl)
>         .withInitialPositionInStream(initialPositionInStream)
>         .withTaskBackoffTimeMillis(500)
>         .withRegionName(regionName)
>
> This constructor ends up calling another constructor with many default values for the configuration. One of those values is DEFAULT_MAX_RECORDS, which is hard-coded to 10,000 records.
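A minimal sketch of how the receiver could cap the KCL fetch size at the configured rate, assuming a SparkConf (here called sparkConf) is reachable from KinesisReceiver.onStart(); the wiring below is illustrative only and not necessarily the patch that was merged for 2.2.0:

    // Sketch only: feed spark.streaming.receiver.maxRate into the KCL's maxRecords.
    // `sparkConf` and the surrounding fields (checkpointAppName, kinesisProvider, ...)
    // are assumed to be the ones already visible in KinesisReceiver.onStart().
    val maxRate = sparkConf.getInt("spark.streaming.receiver.maxRate", 0)
    val clientLibConfiguration =
      new KinesisClientLibConfiguration(checkpointAppName, streamName, kinesisProvider, workerId)
        .withKinesisEndpoint(endpointUrl)
        .withInitialPositionInStream(initialPositionInStream)
        .withTaskBackoffTimeMillis(500)
        .withRegionName(regionName)
        // Bound how many records a single GetRecords call may return;
        // the KCL default (DEFAULT_MAX_RECORDS) is 10,000.
        .withMaxRecords(if (maxRate > 0) maxRate else 10000)

Note that maxRecords bounds records per GetRecords call, not records per second, so the effective per-batch volume also depends on how often the KCL worker polls each shard (idleTimeBetweenReadsInMillis, 1,000 ms by default).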