Thanks Roman for your response.

Mans
On Wednesday, June 17, 2020, 03:26:31 AM EDT, Roman Grebennikov wrote:
Hi,
It will occur if your job reaches SHARD_GETRECORDS_RETRIES consecutive
failed attempts to pull the data from Kinesis.
So if you scale up the stream in Kinesis and tune the backoff parameters a
bit, you will lower the probability of this exception almost to zero (but
with increased costs and w…
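As a rough sketch of where those knobs live: the Flink Kinesis connector reads them from the consumer `Properties`. The property key strings below are what I believe `ConsumerConfigConstants` maps names like `SHARD_GETRECORDS_RETRIES` to; treat them as assumptions and verify against your flink-connector-kinesis version:

```java
import java.util.Properties;

public class KinesisConsumerTuning {
    // Build a consumer config with more retries and gentler polling.
    public static Properties tunedConsumerConfig() {
        Properties props = new Properties();
        // Region/credentials omitted for brevity.
        // More consecutive GetRecords failures tolerated before the job fails
        // (ConsumerConfigConstants.SHARD_GETRECORDS_RETRIES).
        props.setProperty("flink.shard.getrecords.maxretries", "10");
        // Poll each shard less often (connector default is 200 ms) to stay
        // under the per-shard read quota when many apps share the stream.
        props.setProperty("flink.shard.getrecords.intervalmillis", "2000");
        // Exponential backoff window between failed GetRecords attempts.
        props.setProperty("flink.shard.getrecords.backoff.base", "1000");
        props.setProperty("flink.shard.getrecords.backoff.max", "10000");
        return props;
    }

    public static void main(String[] args) {
        Properties p = tunedConsumerConfig();
        System.out.println(p);
        // These would then be passed to the consumer, e.g. (hypothetical stream name):
        // new FlinkKinesisConsumer<>("my-stream", new SimpleStringSchema(), p);
    }
}
```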
Thanks Roman for your response and advice.
From my understanding, increasing shards will increase throughput, but if
more than 5 requests are made per shard per second, and since we have 20
apps (and growing), the exception might still occur.
Please let me know if I have missed anything.
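To put rough numbers on that worry, here is a back-of-the-envelope sketch. It assumes each app polls every shard at the connector's default 200 ms GetRecords interval and that the per-shard quota is 5 GetRecords calls per second; both figures are assumptions to check against your setup:

```java
public class KinesisReadBudget {
    // Requests per second that one shard sees: every app polls every shard
    // independently, so the rate grows linearly with the number of apps,
    // regardless of how many shards the stream has.
    static double requestsPerShardPerSec(int apps, double pollsPerAppPerSec) {
        return apps * pollsPerAppPerSec;
    }

    public static void main(String[] args) {
        int apps = 20;                  // consumers sharing the stream
        double pollsPerAppPerSec = 5.0; // assumed 200 ms default poll interval
        double perShardLimit = 5.0;     // assumed quota: 5 GetRecords/sec/shard

        System.out.println("requests/shard/sec = "
                + requestsPerShardPerSec(apps, pollsPerAppPerSec)); // 100.0
        // Adding shards does not reduce this per-shard rate; only polling
        // less often does. Minimum poll interval per app to stay under quota:
        System.out.println("min poll interval (sec) = "
                + apps / perShardLimit); // 4.0
    }
}
```

So with 20 apps the per-shard request rate is ~20x over the quota no matter how many shards exist, which matches the concern above.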
Hi,
usually this exception is thrown by aws-java-sdk and means that your Kinesis
stream is hitting a throughput limit (what a surprise). We experienced the
same thing when we had a single "event-bus" style stream and multiple Flink
apps reading from it.
Each Kinesis partition has a limit of 5 reads per second…
Hi:
I am using multiple (almost 30 and growing) Flink streaming applications
that read from the same Kinesis stream and get a
ProvisionedThroughputExceededException, which fails the job.
I have seen a reference:
http://mail-archives.apache.org/mod_mbox/flink-user/201811.mbox/%3CCAJnSTVxpu