Hi Xiaogang,

Yes, I have set that, but I still see the same issue: the memory graph does
not come down. I also checked the HDFS usage; only 3 GB is being used, which
means nothing is getting flushed to disk.

I suspect the parameters are not being applied properly. I am using
FRocksDB; could that be causing this?
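
For reference, this is roughly how I am wiring in the options factory you
suggested (a simplified sketch; the class name and checkpoint path stand in
for my actual job code):

    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class JobSetup {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
            RocksDBStateBackend backend =
                    new RocksDBStateBackend("hdfs:///flink/checkpoints");
            // apply the custom RocksDB options
            backend.setOptions(new BoundedMemoryOptionsFactory());
            env.setStateBackend(backend);
            // ... build the pipeline and env.execute(...) ...
        }
    }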


Regards,
Vinay Patil

On Thu, Jun 29, 2017 at 7:30 AM, SHI Xiaogang <shixiaoga...@gmail.com>
wrote:

> Hi Vinay,
>
> We observed a similar problem before. We found that RocksDB keeps a lot of
> index and filter blocks in memory. As the state size grows (in our case,
> most state is only cleared in windowed streams), these blocks occupy much
> more memory.
>
> We now let RocksDB put these blocks into the block cache (via
> setCacheIndexAndFilterBlocks) and limit RocksDB's memory usage through the
> block cache size. Performance may degrade, but the TMs avoid being killed
> by YARN for exceeding their memory budget.
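>
> For example, something along these lines (a minimal sketch against the
> RocksDB state backend's OptionsFactory API; the 256 MB cache size is only
> an illustrative value, tune it for your TMs):
>
>     import org.apache.flink.contrib.streaming.state.OptionsFactory;
>     import org.rocksdb.BlockBasedTableConfig;
>     import org.rocksdb.ColumnFamilyOptions;
>     import org.rocksdb.DBOptions;
>
>     public class BoundedMemoryOptionsFactory implements OptionsFactory {
>
>         @Override
>         public DBOptions createDBOptions(DBOptions currentOptions) {
>             // no changes to the DB-level options
>             return currentOptions;
>         }
>
>         @Override
>         public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
>             BlockBasedTableConfig tableConfig = new BlockBasedTableConfig()
>                     // count index/filter blocks against the block cache
>                     .setCacheIndexAndFilterBlocks(true)
>                     // cap the block cache, and with it most of RocksDB's memory
>                     .setBlockCacheSize(256 * 1024 * 1024L);
>             return currentOptions.setTableFormatConfig(tableConfig);
>         }
>     }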
>
> This may not be the same cause of your problem, but it may be helpful.
>
> Regards,
> Xiaogang
>
> 2017-06-28 23:26 GMT+08:00 Vinay Patil <vinay18.pa...@gmail.com>:
>
>> Hi Aljoscha,
>>
>> I am using an event-time tumbling window with allowedLateness set to
>> Long.MAX_VALUE, and a custom trigger similar to the 1.0.3 behaviour, where
>> Flink did not discard late elements (we have discussed this scenario
>> before). Roughly, the setup looks like the sketch below.
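>>
>> A simplified sketch of the window setup (the key selector, window size,
>> trigger and window function are placeholders, not my exact code):
>>
>>     import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
>>     import org.apache.flink.streaming.api.windowing.time.Time;
>>
>>     stream
>>         .keyBy(record -> record.getKey())
>>         .window(TumblingEventTimeWindows.of(Time.minutes(5)))
>>         .allowedLateness(Time.milliseconds(Long.MAX_VALUE))
>>         .trigger(new CustomTrigger())  // keeps late elements, like 1.0.3
>>         .apply(new MyWindowFunction());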
>>
>> The watermark is progressing correctly; I validated that against the
>> records earlier.
>>
>> I initially suspected that the RocksDB state backend was not being set at
>> all, but the logs clearly show that RocksDB is initialized successfully, so
>> that should not be the issue.
>>
>> I also have not made any major code changes since the last performance
>> test I ran.
>>
>> The snapshot I attached shows off-heap memory; I have assigned only 12 GB
>> of heap memory per TM.
>>
>>
>> Regards,
>> Vinay Patil
>>
>> On Wed, Jun 28, 2017 at 8:43 PM, Aljoscha Krettek <aljos...@apache.org>
>> wrote:
>>
>>> Hi,
>>>
>>> Just a quick question, because I’m not sure whether this came up in the
>>> discussion so far: what kind of windows are you using? Processing
>>> time/event time? Sliding Windows/Tumbling Windows? Allowed lateness? How is
>>> the watermark behaving?
>>>
>>> Also, the latest memory usage graph you sent, is that heap memory or
>>> off-heap memory or both?
>>>
>>> Best,
>>> Aljoscha
>>>
>>> > On 27. Jun 2017, at 11:45, vinay patil <vinay18.pa...@gmail.com>
>>> wrote:
>>> >
>>> > Hi Stephan,
>>> >
>>> > I am observing a similar issue with Flink 1.2.1.
>>> >
>>> > The memory is continuously increasing and data is not getting flushed
>>> > to disk.
>>> >
>>> > I have attached the snapshot for reference.
>>> >
>>> > Also, the data processed so far is only 17 GB, yet more than 120 GB of
>>> > memory is being used.
>>> >
>>> > Has there been any change to the RocksDB configuration?
>>> >
>>> > <http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/file/n14013/TM_Memory_Usage.png>
>>> >
>>> > Regards,
>>> > Vinay Patil
>>> >
>>> >
>>> >
>>>
>>>
>>
>
