[ https://issues.apache.org/jira/browse/SPARK-18620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15711171#comment-15711171 ]
Takeshi Yamamuro commented on SPARK-18620:
------------------------------------------

I quickly checked and found that setting the max-records limit on the Kinesis workers is not enough, because the workers cannot limit the number of user records inside aggregated messages (http://docs.aws.amazon.com/streams/latest/dev/kinesis-kpl-concepts.html#d0e5184). For example, if we set the workers' max records to 10 and a producer aggregates two records into each message, the workers actually deliver 20 records per callback invocation. My hunch is that we need to control the number of records pushed into the receiver in KinesisRecordProcessor#processRecords (https://github.com/apache/spark/blob/master/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisRecordProcessor.scala#L68).

> Spark Streaming + Kinesis: Receiver MaxRate is violated
> --------------------------------------------------------
>
>                 Key: SPARK-18620
>                 URL: https://issues.apache.org/jira/browse/SPARK-18620
>             Project: Spark
>          Issue Type: Bug
>          Components: DStreams
>    Affects Versions: 2.0.2
>            Reporter: david przybill
>            Priority: Minor
>              Labels: kinesis
>
> I am calling spark-submit passing maxRate; I have a single Kinesis receiver and batches of 1s:
>
>     spark-submit --conf spark.streaming.receiver.maxRate=10 ....
>
> However, a single batch can greatly exceed the established maxRate, e.g. I am getting 300 records.
> It looks like Kinesis is completely ignoring the spark.streaming.receiver.maxRate configuration.
> If you look inside KinesisReceiver.onStart, you see:
>
>     val kinesisClientLibConfiguration =
>       new KinesisClientLibConfiguration(checkpointAppName, streamName, awsCredProvider, workerId)
>         .withKinesisEndpoint(endpointUrl)
>         .withInitialPositionInStream(initialPositionInStream)
>         .withTaskBackoffTimeMillis(500)
>         .withRegionName(regionName)
>
> This constructor ends up calling another constructor, which supplies a lot of default values for the configuration.
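The hunch above can be sketched roughly as follows. This is an illustrative rewrite, not the actual Spark patch: the `maxRecordsPerPush` parameter and the exact receiver call are assumptions for the sake of the example, loosely modeled on what KinesisRecordProcessor#processRecords does.

```scala
// Illustrative sketch only: a hypothetical way to cap how many records are
// pushed into the Spark receiver per processRecords() callback, regardless
// of how many user records the KCL de-aggregates out of each Kinesis
// message. `maxRecordsPerPush` and `pushRecordsToReceiver` are made-up
// names; the real change would live in KinesisRecordProcessor#processRecords.
import scala.collection.JavaConverters._
import com.amazonaws.services.kinesis.model.Record

class ThrottledPush(maxRecordsPerPush: Int) {

  def processRecords(shardId: String, records: java.util.List[Record]): Unit = {
    // Split the (possibly de-aggregated) batch into chunks no larger than
    // maxRecordsPerPush, so a single callback cannot flood the receiver
    // even when KPL aggregation multiplies the record count.
    records.asScala.grouped(maxRecordsPerPush).foreach { chunk =>
      pushRecordsToReceiver(shardId, chunk.asJava)
    }
  }

  // Placeholder for the receiver hand-off (hypothetical signature).
  private def pushRecordsToReceiver(shardId: String,
                                    batch: java.util.List[Record]): Unit = {
    // e.g. receiver.addRecords(shardId, batch) in Spark's actual code
  }
}
```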
> One of those defaults is DEFAULT_MAX_RECORDS, a constant set to 10,000 records.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
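One possible mitigation (a sketch under assumptions, not the shipped fix) is to override the KCL's maxRecords explicitly when building the configuration, instead of inheriting DEFAULT_MAX_RECORDS. Note that, per the comment above, this bounds the records fetched per GetRecords call, and KPL-aggregated messages can still expand into more user records after de-aggregation; the surrounding identifiers are taken from the quoted KinesisReceiver.onStart snippet.

```scala
// Sketch: explicitly capping the KCL fetch size rather than relying on
// DEFAULT_MAX_RECORDS (10,000). withMaxRecords limits records returned per
// GetRecords call; aggregated messages may still de-aggregate into more
// user records than this limit.
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration

val kinesisClientLibConfiguration =
  new KinesisClientLibConfiguration(checkpointAppName, streamName, awsCredProvider, workerId)
    .withKinesisEndpoint(endpointUrl)
    .withInitialPositionInStream(initialPositionInStream)
    .withTaskBackoffTimeMillis(500)
    .withRegionName(regionName)
    .withMaxRecords(10) // e.g. align with spark.streaming.receiver.maxRate
```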