[ 
https://issues.apache.org/jira/browse/FLINK-32446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17737959#comment-17737959
 ] 

Leonard Xu edited comment on FLINK-32446 at 6/28/23 4:08 AM:
-------------------------------------------------------------

Fixed in flink-connector-mongodb(main): 49b7550fbc0285e1c605c5b0efdae762cd9b144b


was (Author: leonard xu):
Fixed in master: 49b7550fbc0285e1c605c5b0efdae762cd9b144b

> MongoWriter should regularly check whether the last write time is greater 
> than the specified time.
> --------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-32446
>                 URL: https://issues.apache.org/jira/browse/FLINK-32446
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / MongoDB
>    Affects Versions: mongodb-1.0.0, mongodb-1.0.1
>            Reporter: Jiabao Sun
>            Assignee: Jiabao Sun
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: mongodb-1.1.0
>
>
> The Mongo sink waits for a new record before writing the previous records. I 
> have an upsert-kafka topic that already contains some events. When I start a 
> new upsert-kafka-to-MongoDB sink job, I expect all the data from the topic to 
> be loaded into MongoDB right away. Instead, only the first record is written 
> to MongoDB; the remaining records do not arrive until a new event is written 
> to the Kafka topic, and that new event is in turn delayed until the next 
> event arrives. 
> To prevent this problem, the MongoWriter should regularly check whether the 
> time since the last write exceeds the specified interval.
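The fix described above can be sketched as a periodic flush check: a buffering writer that, on a timer tick, flushes whenever the time elapsed since the last write exceeds a configured batch interval, instead of waiting for the next record to trigger the flush. This is an illustrative sketch, not the connector's actual MongoWriter API; the class, field, and method names are assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the interval-based flush described in the issue.
// In the real connector the periodic call would come from a scheduled
// executor inside the sink; here we invoke it directly for illustration.
public class IntervalFlushWriter {
    private final long batchIntervalMs;
    private final List<String> buffer = new ArrayList<>();
    private long lastSendTimeMs = System.currentTimeMillis();
    private int flushCount = 0;

    public IntervalFlushWriter(long batchIntervalMs) {
        this.batchIntervalMs = batchIntervalMs;
    }

    // Records are buffered; no write to the database happens yet.
    public void write(String record) {
        buffer.add(record);
    }

    // Called periodically: flush if the buffer is non-empty and the
    // batch interval has elapsed since the last write, so buffered
    // records do not sit waiting for the next incoming record.
    public void checkFlushInterval() {
        long now = System.currentTimeMillis();
        if (!buffer.isEmpty() && now - lastSendTimeMs >= batchIntervalMs) {
            flush();
        }
    }

    private void flush() {
        // A real sink would issue a bulk write to MongoDB here.
        buffer.clear();
        lastSendTimeMs = System.currentTimeMillis();
        flushCount++;
    }

    public int getFlushCount() { return flushCount; }

    public int getBufferedCount() { return buffer.size(); }

    public static void main(String[] args) throws InterruptedException {
        IntervalFlushWriter writer = new IntervalFlushWriter(50);
        writer.write("event-1");          // buffered; no newer record arrives
        TimeUnit.MILLISECONDS.sleep(80);  // exceed the batch interval
        writer.checkFlushInterval();      // periodic check triggers the flush
        System.out.println("flushes=" + writer.getFlushCount()
                + " buffered=" + writer.getBufferedCount());
        // prints flushes=1 buffered=0
    }
}
```

Without the periodic check, `flush()` would only ever run from inside `write(...)`, which reproduces the reported behavior: the last record stays buffered until the next event arrives.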



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
