OK, I think everything I can say about this case was already in the previous 
email. Incremental checkpoints are designed with large state in mind, and you 
cannot extrapolate this observation to, e.g., 1 million keys, so I think 
everything is working just fine.
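
For reference, the switch is the second constructor argument of the RocksDB 
state backend; a minimal sketch (the checkpoint path is just an example):

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // second argument: true = incremental, false = full checkpoints
        env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints", true));
        env.enableCheckpointing(60_000L); // trigger a checkpoint every 60 seconds
        // ... job definition and env.execute() follow here
    }
}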

> Am 01.12.2017 um 18:22 schrieb vijayakumar palaniappan 
> <vijayakuma...@gmail.com>:
> 
> I observed the job for 18 hours; it went from 118 KB to 1.10 MB.
> 
> I am using Flink version 1.3.0.
> 
> On Fri, Dec 1, 2017 at 11:39 AM, Stefan Richter <s.rich...@data-artisans.com 
> <mailto:s.rich...@data-artisans.com>> wrote:
> Maybe one more question: is the size always increasing, or does it also 
> shrink eventually? Over what period of time did you observe the growth? Due 
> to the way RocksDB works, it persists updates in a form that is sometimes 
> closer to a log than to in-place updates, so it is perfectly possible to 
> observe growing state for some time. Eventually, once the state reaches a 
> critical mass, RocksDB will consolidate and prune the written state, and 
> that is when you should also observe a drop in size.
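> 
> To make the log-like behavior concrete, here is a small sketch against the 
> plain RocksDB Java API (path, key, and iteration count are illustrative): 
> repeatedly overwriting a single key keeps appending new versions, and the 
> on-disk size only drops once compaction merges the files.
> 
> import org.rocksdb.Options;
> import org.rocksdb.RocksDB;
> 
> public class LogStructuredWrites {
>     public static void main(String[] args) throws Exception {
>         RocksDB.loadLibrary();
>         try (Options opts = new Options().setCreateIfMissing(true);
>              RocksDB db = RocksDB.open(opts, "/tmp/rocksdb-demo")) {
>             byte[] key = "the-only-key".getBytes();
>             // Each put appends a new version; stale versions linger in the
>             // SST files, so disk usage grows even though there is one key.
>             for (int i = 0; i < 1_000_000; i++) {
>                 db.put(key, ("value-" + i).getBytes());
>             }
>             // Compaction merges the files and drops the stale versions;
>             // this is when the on-disk size shrinks again.
>             db.compactRange();
>         }
>     }
> }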
> 
> From what it seems, your use case works with a very small state, so if this 
> is not just a test, you should reconsider whether this is the right use case 
> for a) incremental checkpoints and b) RocksDB at all.
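> 
> For small state, a heap-based backend is usually the better fit; switching 
> is a one-line change in the job setup (the path is again just an example):
> 
> import org.apache.flink.runtime.state.filesystem.FsStateBackend;
> 
> // keeps the working state on the JVM heap and snapshots it to the
> // file system on checkpoints
> env.setStateBackend(new FsStateBackend("hdfs:///flink/checkpoints"));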
> 
> > Am 01.12.2017 um 16:34 schrieb vijayakumar palaniappan 
> > <vijayakuma...@gmail.com <mailto:vijayakuma...@gmail.com>>:
> >
> > I have a simple event-time window aggregate count function with 
> > incremental checkpointing enabled. The checkpoint size keeps increasing 
> > over time, even though my input data has a single key and arrives at a 
> > constant rate.
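> >
> > Roughly, as a simplified self-contained sketch (the generator source and
> > the key "the-key" are stand-ins for my real input), the job looks like this:
> >
> > import org.apache.flink.api.java.functions.KeySelector;
> > import org.apache.flink.api.java.tuple.Tuple2;
> > import org.apache.flink.streaming.api.TimeCharacteristic;
> > import org.apache.flink.streaming.api.datastream.DataStream;
> > import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> > import org.apache.flink.streaming.api.functions.source.SourceFunction;
> > import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;
> > import org.apache.flink.streaming.api.functions.windowing.WindowFunction;
> > import org.apache.flink.streaming.api.windowing.time.Time;
> > import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
> > import org.apache.flink.util.Collector;
> >
> > public class WindowCountJob {
> >     public static void main(String[] args) throws Exception {
> >         StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
> >         env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
> >
> >         // stand-in source: a single key, constant rate, ascending timestamps
> >         DataStream<Tuple2<String, Long>> events = env.addSource(
> >             new SourceFunction<Tuple2<String, Long>>() {
> >                 private volatile boolean running = true;
> >                 @Override
> >                 public void run(SourceContext<Tuple2<String, Long>> ctx) throws Exception {
> >                     long ts = 0L;
> >                     while (running) {
> >                         ctx.collect(Tuple2.of("the-key", ts));
> >                         ts += 100L;
> >                         Thread.sleep(100L);
> >                     }
> >                 }
> >                 @Override
> >                 public void cancel() { running = false; }
> >             });
> >
> >         events
> >             .assignTimestampsAndWatermarks(new AscendingTimestampExtractor<Tuple2<String, Long>>() {
> >                 @Override
> >                 public long extractAscendingTimestamp(Tuple2<String, Long> e) { return e.f1; }
> >             })
> >             .keyBy(new KeySelector<Tuple2<String, Long>, String>() {
> >                 @Override
> >                 public String getKey(Tuple2<String, Long> e) { return e.f0; }
> >             })
> >             .timeWindow(Time.minutes(1))
> >             // count the elements of each one-minute window
> >             .apply(new WindowFunction<Tuple2<String, Long>, Tuple2<String, Long>, String, TimeWindow>() {
> >                 @Override
> >                 public void apply(String key, TimeWindow w, Iterable<Tuple2<String, Long>> in,
> >                                   Collector<Tuple2<String, Long>> out) {
> >                     long count = 0L;
> >                     for (Tuple2<String, Long> ignored : in) { count++; }
> >                     out.collect(Tuple2.of(key, count));
> >                 }
> >             })
> >             .print();
> >
> >         env.execute("windowed count");
> >     }
> > }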
> >
> > When I turn off incremental checkpointing, the checkpoint size remains constant.
> >
> > Are there any switches I need to enable, or is this a bug?
> >
> > --
> > Thanks,
> > -Vijay
> 
> -- 
> Thanks,
> -Vijay
