[ https://issues.apache.org/jira/browse/FLINK-21321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17281075#comment-17281075 ]

Jiayi Liao commented on FLINK-21321:
------------------------------------

[~legojoey17] We've run into the same problem recently. `deleteRange` was our 
first thought as well, but we decided against it when we noticed the feature 
is marked experimental. To reduce the recovery time on jobs with large state, 
we suggest that our users increase the write buffer count and the number of 
flush threads as a temporary workaround.
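
For reference, here is a minimal sketch of that workaround, assuming Flink's 
`RocksDBOptionsFactory` hook; the concrete values are placeholders, and 
depending on the Flink version the equivalent configuration keys (e.g. 
`state.backend.rocksdb.writebuffer.count` and `state.backend.rocksdb.thread.num`) 
can be used instead:

{code:java}
// Sketch only: raise write buffer and background (flush/compaction) thread
// counts; the numbers below are placeholders, not recommendations.
import java.util.Collection;

import org.apache.flink.contrib.streaming.state.RocksDBOptionsFactory;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;

public class LargerWriteBufferOptionsFactory implements RocksDBOptionsFactory {

    @Override
    public DBOptions createDBOptions(
            DBOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
        // More background jobs so flushes and compactions keep up during restore.
        return currentOptions.setMaxBackgroundJobs(8);
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(
            ColumnFamilyOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
        // Allow more memtables to accumulate before writes stall.
        return currentOptions.setMaxWriteBufferNumber(6);
    }
}
{code}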

 

Thanks for creating this issue and pointing out that the feature is now 
widely used. I'm +1 on this improvement.

> Change RocksDB incremental checkpoint re-scaling to use deleteRange
> -------------------------------------------------------------------
>
>                 Key: FLINK-21321
>                 URL: https://issues.apache.org/jira/browse/FLINK-21321
>             Project: Flink
>          Issue Type: Improvement
>          Components: Runtime / State Backends
>            Reporter: Joey Pereira
>            Priority: Minor
>
> In FLINK-8790, it was suggested to use RocksDB's {{deleteRange}} API to more 
> efficiently clip the database to the desired target key-group range.
> During the PR for that ticket, 
> [#5582|https://github.com/apache/flink/pull/5582], the change did not end up 
> using the {{deleteRange}} method, as it was an experimental feature in 
> RocksDB at the time.
> {{deleteRange}} is now in a far less experimental state, although I believe 
> it is still formally marked "experimental". It is heavily used by other 
> projects such as CockroachDB and TiKV, which have teased out several bugs in 
> complex interactions over the years.
> For re-scaling situations where restores trigger {{restoreWithScaling}} and 
> the DB clipping logic, this would likely reduce an O(N) operation (N = state 
> size in records) to O(1). For applications with large state, that can 
> represent a non-trivial amount of time spent re-scaling. In the case of my 
> workplace, we have an operator with 100s of billions of records in state, 
> and re-scaling was taking a long time (>>30 min, though it has been a while 
> since we last did it).
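
As an illustration of the proposed improvement, a minimal sketch of what the 
clipping step could look like with {{deleteRange}}; the class and method names 
and the 2-byte key-group prefix below are simplified assumptions, not Flink's 
actual restore code:

{code:java}
// Sketch only: replace the iterate-and-delete clipping pass with a range
// tombstone. Names and key serialization are simplified for illustration.
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public final class KeyGroupClippingSketch {

    /** Drops all entries whose key-group prefix lies in [startKeyGroup, endKeyGroupExclusive). */
    static void dropKeyGroupRange(
            RocksDB db,
            ColumnFamilyHandle columnFamily,
            int startKeyGroup,
            int endKeyGroupExclusive) throws RocksDBException {

        byte[] begin = keyGroupPrefix(startKeyGroup);
        byte[] end = keyGroupPrefix(endKeyGroupExclusive);

        // Writes a single range tombstone instead of touching every key,
        // turning the clipping pass from O(N in records) into roughly O(1).
        db.deleteRange(columnFamily, begin, end);
    }

    /** Simplified big-endian key-group prefix, assuming a 2-byte prefix layout. */
    private static byte[] keyGroupPrefix(int keyGroup) {
        return new byte[] {(byte) (keyGroup >>> 8), (byte) keyGroup};
    }
}
{code}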



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
