Hi!

JobListener#onJobExecuted might help, as long as your job is not a
forever-running streaming job. See
https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/core/execution/JobListener.html
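
Roughly, registering a listener on the execution environment could look like
the sketch below (untested; it assumes a bounded job reading from a local
filesystem, and the paths and the MoveFileAfterJob class name are just
placeholders for illustration):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.core.execution.JobClient;
import org.apache.flink.core.execution.JobListener;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MoveFileAfterJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical paths, adjust to your setup.
        Path input = Paths.get("/data/input/file_name_23-07-21-15-22-00.csv");
        Path historic = Paths.get("/data/historic").resolve(input.getFileName());

        env.registerJobListener(new JobListener() {
            @Override
            public void onJobSubmitted(JobClient jobClient, Throwable throwable) {
                // Nothing to do on submission.
            }

            @Override
            public void onJobExecuted(JobExecutionResult result, Throwable throwable) {
                if (throwable == null) {
                    try {
                        // Move the processed file out of the input folder.
                        Files.move(input, historic, StandardCopyOption.REPLACE_EXISTING);
                    } catch (java.io.IOException e) {
                        throw new RuntimeException("Could not move " + input, e);
                    }
                }
            }
        });

        // ... build the pipeline that reads the CSV and writes to Kafka,
        // then call env.execute("csv-to-kafka");
    }
}

Note that onJobExecuted is invoked on the client side once env.execute()
returns, so the move happens outside the pipeline itself, only after the
whole file has been processed.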

Samir Vasani <samirvas...@gmail.com> wrote on Fri, Jul 23, 2021 at 3:22 PM:

> Hi,
>
> I am new to Flink and facing some challenges with the use case below.
>
> Use Case description:
>
> I will receive a CSV file with a timestamp every day in a folder, say
> *input*. The file name format would be
> *file_name_dd-mm-yy-hh-mm-ss.csv*.
>
> My Flink pipeline will read this CSV file row by row and write each row
> to my Kafka topic.
>
> Once the pipeline has read the entire file, the file needs to be moved
> to another folder, say *historic*, so that the *input* folder stays empty
> for the next file.
>
> I googled a lot but did not find anything, so could you guide me on how
> to achieve this?
>
> Let me know if anything else is required.
>
>
> Samir Vasani
>
