Re: What does "Continuous incremental cleanup" mean in Flink 1.8 release notes

2019-03-10 Thread Tony Wei
Hi Konstantin, That is really helpful. Thanks. Another follow-up question: the documentation says "Cleanup in full snapshot" is not applicable to incremental checkpointing in the RocksDB state backend. However, when a user manually triggers a savepoint and restarts the job from it, the expired states
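For context, "cleanup in full snapshot" means expired state is filtered out while a full snapshot (e.g. a savepoint) is written, whereas an incremental RocksDB checkpoint only uploads changed SST files and so carries expired entries along until compaction removes them. A minimal conceptual sketch in Python (not the Flink API; the state layout and function name here are invented for illustration):

```python
def take_full_snapshot(state, ttl_seconds, now):
    """Simulate TTL cleanup during a full snapshot: only non-expired
    entries (age < ttl) are written out; expired ones are dropped.
    `state` maps key -> (value, last_update_timestamp)."""
    return {k: (v, ts) for k, (v, ts) in state.items()
            if now - ts < ttl_seconds}

# "a" was last updated at t=0, "b" at t=90; snapshot taken at t=100
state = {"a": (1, 0.0), "b": (2, 90.0)}
snapshot = take_full_snapshot(state, ttl_seconds=60, now=100.0)
print(snapshot)  # {'b': (2, 90.0)} -- "a" (age 100s) is expired and dropped
```

This is why restoring from a savepoint can shrink state even when the running job used incremental checkpoints: the savepoint is a full snapshot and goes through this filtering.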

Re: estimate number of keys on rocks db

2019-03-10 Thread Avi Levi
Thanks Yun, Attached. Please let me know if it is ok. I made several trials, including aggregation functions, but couldn't figure out why the line is not going straight up and why it has those peaks. On Sun, Mar 10, 2019 at 4:49 PM Yun Tang wrote: > Hi Avi > > Unfortunately, we cannot see the

Re: estimate number of keys on rocks db

2019-03-10 Thread Yun Tang
Hi Avi, Unfortunately, we cannot see the attached images. By the way, did you ever use windows in this job? Best, Yun Tang From: Avi Levi Sent: Sunday, March 10, 2019 19:41 To: user Subject: estimate number of keys on rocks db Hi, I am trying to estimate the number

estimate number of keys on rocks db

2019-03-10 Thread Avi Levi
Hi, I am trying to estimate the number of keys at a given minute. I created a graph based on avg_over_time with 1h and 5m intervals. Looking at the graph, you can see that it has high spikes, which doesn't make
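Two things are worth noting here. First, RocksDB's estimate-num-keys property is only an approximation and can jump around as memtables flush and compactions run, so spikes in the raw metric are not necessarily real key-count changes. Second, Prometheus-style `avg_over_time` averages whatever samples fall in the lookback window, so a 5m window passes spikes through while a 1h window smooths them. A small sketch of that averaging semantics (the function below mimics `avg_over_time`; it is illustrative, not Prometheus itself):

```python
def avg_over_time(samples, window, t):
    """Prometheus-style avg_over_time: arithmetic mean of all samples
    with timestamps in the half-open range (t - window, t].
    `samples` is a list of (timestamp_seconds, value) pairs."""
    vals = [v for (ts, v) in samples if t - window < ts <= t]
    return sum(vals) / len(vals) if vals else None

# A spiky estimate-num-keys series: one flush/compaction blip at t=120
samples = [(0, 100), (60, 100), (120, 900), (180, 100), (240, 100)]
print(avg_over_time(samples, window=300, t=240))  # 260.0 (5 samples averaged)
print(avg_over_time(samples, window=60, t=120))   # 900.0 (spike passes through)
```

With a short window the spike dominates a single evaluation; with a long window it is diluted, which matches the difference Avi describes between the 5m and 1h graphs.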

Re: S3 parquet sink - failed with S3 connection exception

2019-03-10 Thread Averell
Hi Kostas and everyone, Just an update on my issue. I have tried to: * change the S3-related configuration in Hadoop as suggested by the Hadoop documentation [1]: increased fs.s3a.threads.max from 10 to 100, and fs.s3a.connection.maximum from 15 to 120. For reference, I have only 3 S3 sinks,
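The two S3A settings mentioned are standard Hadoop configuration keys. A sketch of what the change described above might look like in `core-site.xml` (values taken from the message; whether this is the right place depends on how the Flink deployment picks up Hadoop configuration):

```xml
<configuration>
  <!-- Thread pool for S3A uploads; Hadoop default is 10 -->
  <property>
    <name>fs.s3a.threads.max</name>
    <value>100</value>
  </property>
  <!-- Maximum simultaneous connections to S3; Hadoop default is 15 -->
  <property>
    <name>fs.s3a.connection.maximum</name>
    <value>120</value>
  </property>
</configuration>
```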

Re: Backoff strategies for async IO functions?

2019-03-10 Thread Shuyi Chen
Hi Konstantin (cc Till, since he owns the code), For async I/O, IO failure and retry is a common and expected pattern. In most use cases, users will need to deal with IO failure and retry. Therefore, I think it is better to address the problem in Flink rather than have each user implement their own custom
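At the time of this thread, Flink's AsyncFunction had no built-in retry/backoff, which is what the discussion argues should change. A minimal sketch of the kind of capped exponential backoff schedule such a strategy typically uses (names and defaults here are hypothetical, not a Flink API):

```python
import random

def backoff_delays(base=0.1, factor=2.0, max_delay=10.0, attempts=5, jitter=False):
    """Return the sleep durations for successive retry attempts:
    base * factor**i, capped at max_delay, optionally randomized
    (jitter spreads retries out to avoid thundering herds)."""
    delays = []
    for i in range(attempts):
        d = min(base * factor ** i, max_delay)
        if jitter:
            d = random.uniform(0, d)
        delays.append(d)
    return delays

print(backoff_delays())            # [0.1, 0.2, 0.4, 0.8, 1.6]
print(backoff_delays(attempts=8))  # last entries are capped at 10.0
```

A retry wrapper around the async request would walk this schedule, sleeping between attempts and failing permanently once the schedule is exhausted.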