[ https://issues.apache.org/jira/browse/BEAM-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17071120#comment-17071120 ]
Alexey Romanenko commented on BEAM-9439:
----------------------------------------

[~titmus] Thanks for testing this. I'm not a Dataflow expert, but its [autoscaling documentation|https://cloud.google.com/dataflow/docs/guides/deploying-a-pipeline#autoscaling] recommends relying on total backlog bytes only when {{getTotalBacklogBytes()}} is implemented, and I'm not sure it will work properly if every shard reader returns the total bytes (if I understood your current fix correctly). For example, KafkaIO implements {{getSplitBacklogBytes()}} by summing the backlog byte sizes of all partitions assigned to the reader. It estimates this approximately by taking the number of records remaining between two offsets and multiplying it by the average record size.

> KinesisReader does not report correct backlog statistics
> ---------------------------------------------------------
>
>                 Key: BEAM-9439
>                 URL: https://issues.apache.org/jira/browse/BEAM-9439
>             Project: Beam
>          Issue Type: Bug
>          Components: io-java-kinesis
>            Reporter: Sam Whittle
>            Priority: Major
>
> The KinesisReader implementing KinesisIO reports backlog by implementing the
> UnboundedSource.getTotalBacklogBytes() method as opposed to the
> UnboundedSource.getSplitBacklogBytes() method. This value is supposed to
> represent the total backlog across all shards.
>
> The getTotalBacklogBytes() method is implemented by calling
> SimplifiedKinesisClient.getBacklogBytes with the watermark of the Kinesis
> shards managed within the UnboundedReader instance. As this watermark may be
> further ahead than the watermark across all shards, this may miss backlog
> bytes.
>
> An additional concern is that the watermark is calculated using a
> WatermarkPolicy, which means that the watermark may be inconsistent with the
> Kinesis timestamp used for querying backlog.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
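To illustrate the KafkaIO-style estimation described in the comment, here is a minimal sketch of per-split backlog estimation: the remaining records between the current and latest offsets, multiplied by the average observed record size. The class and method names are hypothetical, not the actual KafkaIO implementation.

```java
// Hypothetical sketch of per-split backlog estimation (not actual
// KafkaIO code): backlog bytes ~= remaining records * average record size.
public class SplitBacklogEstimator {
    private long totalBytesRead;
    private long totalRecordsRead;

    // Call once per consumed record to update the running averages.
    public void recordConsumed(long recordSizeBytes) {
        totalBytesRead += recordSizeBytes;
        totalRecordsRead++;
    }

    // Approximate backlog bytes for one split (shard or partition).
    public long splitBacklogBytes(long currentOffset, long latestOffset) {
        if (totalRecordsRead == 0) {
            return 0; // nothing read yet, so no average record size available
        }
        long remainingRecords = Math.max(0, latestOffset - currentOffset);
        long avgRecordSize = totalBytesRead / totalRecordsRead;
        return remainingRecords * avgRecordSize;
    }
}
```

A reader's {{getSplitBacklogBytes()}} would then sum this estimate over all splits it owns, whereas {{getTotalBacklogBytes()}} must cover every shard of the source, including those assigned to other readers.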