Hi Cam,

AFAIK, that's not an easy thing. It's really more of a RocksDB issue than a
Flink one. There is a document explaining the memory usage of RocksDB [1];
it might be helpful.

You could define your own options to tune RocksDB through
"state.backend.rocksdb.options-factory" [2]. However, I would suggest not
doing this unless you are fully experienced with RocksDB. IMO it's quite
complicated.
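For reference, here is a minimal sketch of such a factory (assuming the
`OptionsFactory` interface of Flink's RocksDB state backend; the class name
and the concrete tuning values below are just illustrative placeholders,
not recommendations):

```java
import org.apache.flink.contrib.streaming.state.OptionsFactory;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;

// Sketch of a custom options factory. It would be registered via
// "state.backend.rocksdb.options-factory: com.example.MyOptionsFactory"
// (hypothetical class name).
public class MyOptionsFactory implements OptionsFactory {

    @Override
    public DBOptions createDBOptions(DBOptions currentOptions) {
        // e.g. reduce background compaction/flush threads
        return currentOptions.setMaxBackgroundJobs(2);
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
        // e.g. shrink write buffers to bound memtable memory
        return currentOptions
                .setWriteBufferSize(32 * 1024 * 1024)  // 32 MB per memtable
                .setMaxWriteBufferNumber(2);           // at most 2 memtables
    }
}
```

Note that this runs per column family (i.e. per registered state), so the
numbers multiply with the number of states and operators.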

Meanwhile, I can share a bit of experience with this. We have tried putting
the index and filter blocks into the block cache before. It's useful for
controlling memory usage, but performance might be affected at the same
time. Anyway, you could try it and tune it. Good luck!
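Roughly, what I mean is something like this (a sketch using RocksDB's
`BlockBasedTableConfig`; the cache size here is just a placeholder):

```java
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.ColumnFamilyOptions;

// Sketch: cap the block cache and make index/filter blocks live inside it,
// so their memory is bounded by the cache size as well. This would go into
// the createColumnOptions(...) method of a custom options factory.
ColumnFamilyOptions tuneColumnOptions(ColumnFamilyOptions currentOptions) {
    BlockBasedTableConfig tableConfig = new BlockBasedTableConfig()
            .setBlockCacheSize(128 * 1024 * 1024)        // 128 MB cache (placeholder)
            .setCacheIndexAndFilterBlocks(true)          // count index/filter blocks against the cache
            .setPinL0FilterAndIndexBlocksInCache(true);  // keep hot L0 blocks from being evicted
    return currentOptions.setTableFormatConfig(tableConfig);
}
```

The performance cost comes from index/filter blocks now competing with data
blocks for cache space, which can cause extra reads on eviction.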

1. https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB
2. https://ci.apache.org/projects/flink/flink-docs-master/ops/state/large_state_tuning.html#tuning-rocksdb

Thanks,
Biao /'bɪ.aʊ/



On Thu, Aug 8, 2019 at 11:44 AM Cam Mach <cammac...@gmail.com> wrote:

> Yes, that is correct.
> Cam Mach
> Software Engineer
> E-mail: cammac...@gmail.com
> Tel: 206 972 2768
>
>
>
> On Wed, Aug 7, 2019 at 8:33 PM Biao Liu <mmyy1...@gmail.com> wrote:
>
>> Hi Cam,
>>
>> Do you mean you want to limit the memory usage of RocksDB state backend?
>>
>> Thanks,
>> Biao /'bɪ.aʊ/
>>
>>
>>
>> On Thu, Aug 8, 2019 at 2:23 AM miki haiat <miko5...@gmail.com> wrote:
>>
>>> I think using the metrics exporter is the easiest way
>>>
>>> [1]
>>> https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/metrics.html#rocksdb
>>>
>>>
>>> On Wed, Aug 7, 2019, 20:28 Cam Mach <cammac...@gmail.com> wrote:
>>>
>>>> Hello everyone,
>>>>
>>>> What is the easiest and most efficient way to cap RocksDB's memory usage?
>>>>
>>>> Thanks,
>>>> Cam
>>>>
>>>>
