[ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16956749#comment-16956749
 ] 

shengjk1 edited comment on FLINK-7289 at 10/22/19 7:11 AM:
-----------------------------------------------------------

 I also rebuilt Flink to use a WriteBufferManager. For one specific job I'm 
working on, I went from an OOM kill every 5-7 hours to at least 20 hours with no 
OOM events, but eventually the job was still OOM-killed. So I think we also need 
other, easier-to-use parameters to solve this.

 


was (Author: shengjk1):
 I also rebuilt Flink to use a WriteBufferManager. For one specific job I'm 
working on, I went from an OOM kill every 5-7 hours to at least 20 hours with no 
OOM events, but eventually the job was still OOM-killed. So I think we also need 
other, easier-to-use parameters to solve this.
{code:java}
class BackendOptions implements ConfigurableOptionsFactory {

   // Shared across all RocksDB instances in this process: caps total memtable
   // memory at 1 GB, with charges accounted against a small LRU cache.
   private static final WriteBufferManager writeBufferManager =
         new WriteBufferManager(1 << 30, new LRUCache(1 << 18));

   @Override
   public DBOptions createDBOptions(DBOptions currentOptions) {
      return currentOptions
            .setMaxBackgroundJobs(4)
            .setUseFsync(false)
            .setMaxBackgroundFlushes(3)
            .setWriteBufferManager(writeBufferManager);
   }

   @Override
   public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
      return currentOptions
            .setLevelCompactionDynamicLevelBytes(true)
            .setMinWriteBufferNumberToMerge(2)
            .setMaxWriteBufferNumber(5)
            .setOptimizeFiltersForHits(true)
            .setMaxWriteBufferNumberToMaintain(3)
            .setTableFormatConfig(
                  new BlockBasedTableConfig()
                        .setCacheIndexAndFilterBlocks(true)
                        .setCacheIndexAndFilterBlocksWithHighPriority(true)
                        .setBlockCacheSize(256 * 1024 * 1024)
                        .setBlockSize(4 * 32 * 1024));
   }

   @Override
   public OptionsFactory configure(Configuration configuration) {
      return this;
   }
}
{code}
 

> Memory allocation of RocksDB can be problematic in container environments
> -------------------------------------------------------------------------
>
>                 Key: FLINK-7289
>                 URL: https://issues.apache.org/jira/browse/FLINK-7289
>             Project: Flink
>          Issue Type: Improvement
>          Components: Runtime / State Backends
>    Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.7.2, 1.8.2, 1.9.0
>            Reporter: Stefan Richter
>            Priority: Major
>             Fix For: 1.10.0
>
>         Attachments: completeRocksdbConfig.txt
>
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> allocated memory by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same process and make reasoning about the 
> configuration also dependent on the Flink job.
> Some information about the memory management in RocksDB can be found here:
> https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
> We should try to figure out ways to help users in one or more of the 
> following ways:
> - Some way to autotune or calculate the RocksDB configuration.
> - Conservative default values.
> - Additional documentation.
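
The issue description notes that the memory limit is determined by an interplay of several configuration parameters. As a rough illustration, the linked "Memory-usage-in-RocksDB" wiki page gives a rule of thumb for the worst-case native footprint of one instance; the sketch below is only an approximation of that rule, and the method name and parameters are my own, not anything from Flink or RocksDB:

{code:java}
// Hedged sketch: rough upper bound on one RocksDB instance's native memory,
// following the rule of thumb from the RocksDB "Memory-usage-in-RocksDB" wiki.
// Names and the exact formula here are assumptions for illustration only.
public class RocksDbMemoryEstimate {

    /** Approximate worst-case native memory for one RocksDB instance, in bytes. */
    static long estimateBytes(long blockCacheSize,
                              long writeBufferSize,
                              int maxWriteBufferNumber,
                              int numColumnFamilies,
                              long indexAndFilterBytes) {
        // Memtables: each column family may hold up to maxWriteBufferNumber buffers.
        long memtables = writeBufferSize * maxWriteBufferNumber * numColumnFamilies;
        // The block cache is a fixed budget; index/filter blocks live outside it
        // unless cache_index_and_filter_blocks is enabled.
        return blockCacheSize + memtables + indexAndFilterBytes;
    }

    public static void main(String[] args) {
        // Example: 256 MB block cache, 64 MB write buffers x 5 buffers x 3 column
        // families, index/filter blocks assumed cached (0 extra).
        long estimate = estimateBytes(256L << 20, 64L << 20, 5, 3, 0L);
        System.out.println(estimate / (1024 * 1024) + " MB");
    }
}
{code}

With several such instances in one TaskManager process, each multiplies this bound, which is why per-parameter tuning is so hard to get right by hand.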



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
