Piotrek wrote:

Ebru: could you explain in a little more detail what your job(s) look like? Could you post some code? If you are just using maps and filters, there shouldn't be any network transfers involved, aside from the Source and Sink functions.

Piotrek
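To illustrate Piotrek's point, here is a minimal sketch (a hypothetical job, not Ebru's actual code) of a pipeline built only from maps and filters: Flink chains such operators into a single task, so records pass between them as local calls, and only the source and sink cross an operator boundary.

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Hypothetical pipeline: filter and map chain into one task per slot,
// so there are no network transfers between them; only the (assumed)
// socket source and the print sink cross an operator boundary.
public class ChainedMapFilterJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        env.socketTextStream("localhost", 9999)  // assumed source
           .filter(line -> !line.isEmpty())
           .map(line -> line.toUpperCase())
           .print();                             // stand-in sink

        env.execute("chained map/filter job");
    }
}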
On 8 Nov 2017, Javier Lopez <javier.lo...@zalando.de> wrote:

Hi,

We have been facing a similar problem. We have tried some different approaches, including the one suggested by Kien, but it didn't work. We have a workaround similar to the one that Flavio has: we restart the taskmanagers once they reach a memory threshold. We created a small test to remove all of our dependencies and leave only Flink native libraries. This […] per node. We have one job that uses 56 slots, and we cannot execute that job 5 times in a row because the whole cluster dies. If you want, we can publish our test job.

Regards,
Javier
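For reference, a rough sketch of the kind of threshold check such a workaround implies (hypothetical code, not Javier's; a real setup would more likely watch the container's RSS from outside and restart the TaskManager process):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Hypothetical watcher: polls JVM heap usage and exits once it crosses
// a threshold, relying on an external supervisor to restart the process.
public class MemoryThresholdWatcher {
    private static final double THRESHOLD = 0.9; // assumed 90% limit

    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        while (true) {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            if (heap.getMax() > 0
                    && (double) heap.getUsed() / heap.getMax() > THRESHOLD) {
                System.err.println("Heap above threshold, requesting restart");
                System.exit(1); // supervisor (e.g. Docker) restarts the container
            }
            Thread.sleep(10_000);
        }
    }
}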
On 8 Nov 2017, at 10:25, Flavio Pompermaier <pomperma...@okkam.it> wrote:

We also have the same problem in production. At the moment the solution is to restart the entire Flink cluster after every job. We've tried to reproduce this problem with […] and the memory leak are correlated.

Best,
Flavio
[…] & @Piotr: could you please have a look at this? You both recently worked on the network stack and might be most familiar with this.
On Tue, Nov 7, 2017 at 12:43 PM, ebru wrote:

Begin forwarded message:
From: ebru
Subject: Re: Flink memory leak
Date: 7 November 2017 at 14:09:17 GMT+3
To: Ufuk Celebi

Hi Ufuk,

There are three snapshots of htop output.
1. snapshot is the initial state.
2. snapshot is after one job was submitted.
3. […]
On 2017-11-07 16:53, Ufuk Celebi wrote:

Do you use any windowing? If yes, could you please share that code? If there is no stateful operation at all, it's strange where the list state instances are coming from.

On Tue, Nov 7, 2017 at 2:35 PM, ebru <b20926...@cs.hacettepe.edu.tr> wrote:

Hi Ufuk,

We don't explicitly define any state descriptor. We only use map and filter operators. We thought that the GC handles clearing Flink's internal state. So how can we manage the memory if it is always increasing?

- Ebru
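For context on why the windowing question matters here: even when user code never defines a state descriptor, a window operator buffers window contents in Flink-managed (often list) state per key and window until each window fires. A minimal sketch (a hypothetical job, not Ebru's):

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;

// Hypothetical job: no StateDescriptor appears in user code, yet the
// window operator below keeps per-key, per-window contents in
// Flink-managed state until each window fires and is purged.
public class WindowedCountJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        env.socketTextStream("localhost", 9999)  // assumed source
           .map(new MapFunction<String, Tuple2<String, Integer>>() {
               @Override
               public Tuple2<String, Integer> map(String word) {
                   return Tuple2.of(word, 1);
               }
           })
           .keyBy(0)                             // key by the word
           .timeWindow(Time.minutes(5))          // window state lives here
           .sum(1)                               // count per word and window
           .print();

        env.execute("windowed count");
    }
}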
Ufuk Celebi <…@apache.org> wrote:

Hey Ebru, the memory usage might be increasing as long as a job is running. This is expected (also in the case of multiple running jobs). The screenshots are not helpful in that regard. :-(

What kind of stateful operations are you using? Depending on your use case, you have to manually call `clear()` on the state instance in order to release the managed state.

Best,
Ufuk
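A minimal sketch of what Ufuk describes, assuming user-defined managed state on a keyed stream (hypothetical function and names; Ebru's jobs reportedly define no such descriptors):

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

// Hypothetical operator: counts events per key and calls clear() once a
// key is finished with, so the backend can release its managed state.
public class CountUntilThreshold extends RichFlatMapFunction<String, String> {

    private transient ValueState<Integer> count;

    @Override
    public void open(Configuration parameters) {
        count = getRuntimeContext().getState(
                new ValueStateDescriptor<>("count", Integer.class));
    }

    @Override
    public void flatMap(String value, Collector<String> out) throws Exception {
        Integer current = count.value();
        int next = (current == null) ? 1 : current + 1;
        if (next >= 1000) {                  // assumed threshold
            out.collect(value + " reached 1000");
            count.clear();                   // releases this key's state
        } else {
            count.update(next);
        }
    }
}

Such a function would be applied after a keyBy(...), since per-key managed state requires a keyed stream.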
Hey Ebru,

Let me pull in Aljoscha (CC'd) who might have an idea what's causing this.

Since multiple jobs are running, it will be hard to understand which job the state descriptors from the heap snapshot belong to.

- Is it possible to isolate the problem and reproduce the behaviour with only a single job?
Hi,

We are using Flink 1.3.1 in production. We have one job manager and 3 task managers in standalone mode. Recently, we've noticed that we have memory-related problems. We use Docker containers to serve the Flink cluster. We have 300 slots, and 20 jobs are running with a parallelism of 10. Also, the […]
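For reference, the numbers above imply roughly this standalone configuration (a sketch of flink-conf.yaml; the heap sizes are assumptions, not stated in the thread; 3 task managers with 100 slots each give the 300 slots mentioned):

# Hypothetical flink-conf.yaml matching the setup described (Flink 1.3.x)
jobmanager.rpc.address: jobmanager   # assumed container hostname
jobmanager.heap.mb: 1024             # assumed, not given in the thread
taskmanager.heap.mb: 4096            # assumed, not given in the thread
taskmanager.numberOfTaskSlots: 100   # 3 TMs x 100 slots = 300 slots total
parallelism.default: 10              # matches the stated parallelism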