Generally speaking, this should not be a problem for exactly-once, but I am
not familiar with DynamoDB and its Flink connector.
Did you observe any failover in Flink logs?
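For context, a FlinkDynamoDBStreamsConsumer setup on Flink 1.8 typically looks roughly like the sketch below. This is a minimal illustration, not the poster's actual job: the region, stream ARN, and job name are placeholders, and the string deserialization schema is an assumption (a real job would use a schema matching its record format).

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kinesis.FlinkDynamoDBStreamsConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;

public class DdbStreamsJobSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Properties config = new Properties();
        // Placeholder region -- use the region of the DynamoDB table's stream.
        config.setProperty(ConsumerConfigConstants.AWS_REGION, "us-east-1");
        // Start from the oldest record still retained by the stream,
        // so a fresh start (without restored state) does not skip records.
        config.setProperty(ConsumerConfigConstants.STREAM_INITIAL_POSITION,
                "TRIM_HORIZON");

        env.addSource(new FlinkDynamoDBStreamsConsumer<>(
                        // Hypothetical stream ARN -- replace with the real one.
                        "arn:aws:dynamodb:us-east-1:111122223333:table/MyTable/stream/2020-09-10T00:00:00.000",
                        new SimpleStringSchema(),
                        config))
           .print();

        env.execute("ddb-streams-job-sketch");
    }
}
```

With checkpointing enabled, the consumer stores its shard sequence numbers in Flink state, so exactly-once within the pipeline should hold across restarts as long as records have not expired from the stream's retention window.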

On Thu, Sep 10, 2020 at 4:34 PM Jiawei Wu <wujiawei5837...@gmail.com> wrote:

> And I suspect I have been throttled by DynamoDB Streams. I contacted AWS
> support but got no response other than a suggestion to increase WCU and RCU.
>
> Is it possible that Flink will lose exactly-once semantics when throttled?
>
> On Thu, Sep 10, 2020 at 10:31 PM Jiawei Wu <wujiawei5837...@gmail.com>
> wrote:
>
>> Hi Andrey,
>>
>> Thanks for your suggestion, but I'm using a Kinesis Data Analytics
>> application, which supports only Flink 1.8....
>>
>> Regards,
>> Jiawei
>>
>> On Thu, Sep 10, 2020 at 10:13 PM Andrey Zagrebin <azagre...@apache.org>
>> wrote:
>>
>>> Hi Jiawei,
>>>
>>> Could you try Flink latest release 1.11?
>>> 1.8 will probably not get bugfix releases.
>>> I will cc Ying Xu, who might have a better idea about the DynamoDB source.
>>>
>>> Best,
>>> Andrey
>>>
>>> On Thu, Sep 10, 2020 at 3:10 PM Jiawei Wu <wujiawei5837...@gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I'm using an AWS Kinesis Data Analytics application with Flink 1.8, and
>>>> FlinkDynamoDBStreamsConsumer to consume DynamoDB stream records. But
>>>> recently I found that my internal state is wrong.
>>>>
>>>> After printing some logs, I found that some DynamoDB stream records are
>>>> skipped and never consumed by Flink. Has anyone encountered the same
>>>> issue before? Or is it a known issue in Flink 1.8?
>>>>
>>>> Thanks,
>>>> Jiawei
>>>>
>>>