Re: Last batch of stream data could not be sinked when data comes very slow

2018-11-17 Thread 徐涛
Hi Gordon, later I found the implementation of the Elasticsearch sink provided by Flink, and it also uses the mechanism of flushing buffered data when a checkpoint happens. I applied that method and the problem is now solved. It uses exactly the approach you described. Thanks a lot for your

Re: Last batch of stream data could not be sinked when data comes very slow

2018-11-14 Thread Tzu-Li (Gordon) Tai
Hi Henry, flushing of buffered data in sinks should occur on two occasions: 1) when a buffer size limit is reached or a fixed flush interval fires, and 2) on checkpoints. Flushing any pending data before completing a checkpoint gives the sink at-least-once guarantees, so that
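The two flush triggers Gordon describes can be sketched as a small plain-Java model. This is illustrative only, not Flink's actual API: the class name `BufferingSink` and its methods are hypothetical, though in Flink itself `invoke()` would correspond to `SinkFunction.invoke()` and `snapshotState()` to `CheckpointedFunction.snapshotState()`.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for a buffering sink (not Flink's API).
// In Flink, invoke() maps to SinkFunction.invoke() and
// snapshotState() maps to CheckpointedFunction.snapshotState().
class BufferingSink {
    private final int batchSize;
    private final List<String> buffer = new ArrayList<>();
    // Stands in for the external system the sink writes to.
    private final List<String> written = new ArrayList<>();

    BufferingSink(int batchSize) {
        this.batchSize = batchSize;
    }

    // Trigger 1: flush when the buffer reaches its size limit.
    void invoke(String record) {
        buffer.add(record);
        if (buffer.size() >= batchSize) {
            flush();
        }
    }

    // Trigger 2: flush all pending data before the checkpoint
    // completes, which is what gives at-least-once guarantees.
    void snapshotState() {
        flush();
    }

    private void flush() {
        written.addAll(buffer);
        buffer.clear();
    }

    List<String> written() {
        return written;
    }
}
```

Because `snapshotState()` drains the buffer, any records that arrived after the last size-triggered flush are still written out no matter how slowly the stream produces data.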

Last batch of stream data could not be sinked when data comes very slow

2018-11-13 Thread 徐涛
Hi Experts, when we implement a sink, we usually batch the writes, flushing by record count or at a time interval; however, this may cause the last batch to never be written to the sink, because the flush is triggered only by incoming records. I also tested the JDBCOutputFormat
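The failure mode described above can be shown with a minimal sketch of a count-triggered sink. The class and method names here are hypothetical, not any real Flink class: the point is only that flushing happens solely inside `invoke()`, so a trailing partial batch stays in the buffer forever once the stream goes quiet.

```java
import java.util.ArrayList;
import java.util.List;

// Naive count-triggered batch sink (illustrative, not a real Flink class).
// flush() is called only from invoke(), so when no more records arrive,
// whatever is left in the buffer is never written out.
class CountOnlySink {
    private final int batchSize;
    private final List<String> buffer = new ArrayList<>();
    // Stands in for the external system the sink writes to.
    private final List<String> written = new ArrayList<>();

    CountOnlySink(int batchSize) {
        this.batchSize = batchSize;
    }

    void invoke(String record) {
        buffer.add(record);
        if (buffer.size() >= batchSize) {
            // The ONLY flush trigger: reaching the record-count limit.
            written.addAll(buffer);
            buffer.clear();
        }
    }

    List<String> written() {
        return written;
    }

    int pending() {
        return buffer.size();
    }
}
```

With a batch size of 3 and 5 incoming records, only the first 3 reach the external system; the last 2 sit in the buffer indefinitely, which is exactly the "last batch" loss the thread is about.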