Does Flink SQL support stop with savepoint draining?

2019-12-31 Thread Yuval Itzchakov
Hi, I tried running stop-with-savepoint on my Flink job, which includes numerous Flink SQL streams. When running the command I used `-d` to drain all streams with MAX_WATERMARK. Looking at the Flink UI, I saw that all sources finished successfully, yet all the Flink SQL streams were still in a Running state…
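For reference, a stop-with-savepoint with draining is issued roughly like this (the job ID and savepoint path below are placeholders):

    ./bin/flink stop -d -p hdfs:///flink/savepoints <jobId>

The `-d`/`--drain` flag emits MAX_WATERMARK before the savepoint is taken and the job stops, so all event-time timers fire before shutdown.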

How to get kafka record's timestamp in job

2019-12-31 Thread 刘建刚
In kafka010, ConsumerRecord has a field named timestamp. It is encapsulated in Kafka010Fetcher. How can I get the timestamp when I write a Flink job? Thank you very much.

Re: How to get kafka record's timestamp in job

2019-12-31 Thread David Anderson
> In kafka010, ConsumerRecord has a field named timestamp. It is encapsulated in Kafka010Fetcher.
> How can I get the timestamp when I write a flink job?

Kafka010Fetcher puts the timestamps into the StreamRecords that wrap your events. If you want to access these timestamps, you can use a ProcessFunction…
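A minimal sketch of that approach, assuming the Kafka source has attached record timestamps to the stream (the class name and output type here are illustrative, not from the original thread):

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.functions.ProcessFunction;
    import org.apache.flink.util.Collector;

    // Pairs each event with the timestamp the Kafka consumer stored
    // in the underlying StreamRecord.
    public class AttachKafkaTimestamp<T> extends ProcessFunction<T, Tuple2<T, Long>> {
        @Override
        public void processElement(T value, Context ctx, Collector<Tuple2<T, Long>> out) {
            // ctx.timestamp() returns the element's StreamRecord timestamp;
            // it can be null if the source did not assign one.
            out.collect(Tuple2.of(value, ctx.timestamp()));
        }
    }

Applied as stream.process(new AttachKafkaTimestamp<>()), this surfaces the Kafka record timestamp as a regular field in the downstream stream.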

Best way to set max heap size via env variables or program arguments?

2019-12-31 Thread Li Peng
Hey folks, we've been running a Flink application on k8s, using the taskmanager.sh script and passing -Djobmanager.heap.size=9000m and -Dtaskmanager.heap.size=7000m as options to the script. I noticed from the logs that the maximum heap size logged completely ignores these arguments…
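In the 1.9-era startup scripts, the heap sizes are computed from flink-conf.yaml rather than from -D arguments passed to taskmanager.sh, so a sketch of the usual fix (assuming these standard configuration keys) is:

    # flink-conf.yaml
    jobmanager.heap.size: 9000m
    taskmanager.heap.size: 7000m

On Kubernetes these lines are typically shipped via a ConfigMap mounted into the container's conf directory.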

Re: Duplicate tasks for the same query

2019-12-31 Thread RKandoji
Thanks Jingsong and Kurt for more details. Yes, I'm planning to try out deduplication once I'm done upgrading to version 1.9. Hopefully deduplication is done by only one task and reused everywhere else. One more follow-up question: I see "For production use cases, we recommend the old planner…"
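For context, the deduplication discussed here is the Blink planner's ROW_NUMBER idiom; a sketch with placeholder table and column names:

    SELECT id, name, ts
    FROM (
      SELECT *,
             ROW_NUMBER() OVER (PARTITION BY id ORDER BY ts DESC) AS row_num
      FROM users
    )
    WHERE row_num = 1

With the row_num = 1 filter, the planner recognizes this as a deduplication query and keeps only the latest row per id, which is what allows the result to be computed once and reused by downstream queries instead of being recomputed per query.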