u moving the job jar to the ~/flink-1.4.2/lib path?
>
> On Mon, Apr 9, 2018 at 12:23 PM, Javier Lopez <javier.lo...@zalando.de>
> wrote:
>
>> Hi,
>>
>> We had the same metaspace problem, it was solved by adding the jar file
>> to the /lib path of every task manager
Hi,
We had the same metaspace problem, it was solved by adding the jar file to
the /lib path of every task manager, as explained here
https://ci.apache.org/projects/flink/flink-docs-release-1.4/monitoring/debugging_classloading.html#avoiding-dynamic-classloading.
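
To make the distinction concrete, here is a minimal stdlib-only sketch (not Flink code, and not from this thread): it prints which classloader serves a class. Classes in Flink's /lib are loaded once by the parent (system) classloader, whereas per-job jars get a fresh child classloader on every submission, which is what can pile up in metaspace.

```java
// Stdlib-only illustration of parent vs. application classloaders.
public class ClassLoaderDemo {
    public static void main(String[] args) {
        // Bootstrap-loaded classes (e.g. java.lang.String) report null.
        System.out.println("String loader: " + String.class.getClassLoader());
        // Our own class comes from the application (system) classloader.
        System.out.println("Demo loader:   " + ClassLoaderDemo.class.getClassLoader());
    }
}
```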
We also added these Java
could cause this
problem? Our workaround is to restart the master, but we cannot keep doing
this in the long term.
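
For reference, a threshold check like the restart workaround described above can be sketched with nothing but `java.lang.management` (the pool name "Metaspace" is the HotSpot name; the 1 GiB limit is an example value, not a recommendation from this thread):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

// Hedged sketch: poll the Metaspace pool and report when usage crosses
// a limit, so an external script could trigger a restart.
public class MetaspaceCheck {
    static final long THRESHOLD_BYTES = 1024L * 1024 * 1024; // example: 1 GiB

    static boolean overThreshold() {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().contains("Metaspace")) {
                MemoryUsage usage = pool.getUsage();
                return usage.getUsed() > THRESHOLD_BYTES;
            }
        }
        return false; // no Metaspace pool found on this JVM
    }

    public static void main(String[] args) {
        System.out.println("restart needed: " + overThreshold());
    }
}
```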
Thanks for all your support, it has been helpful.
On 16 November 2017 at 15:27, Javier Lopez <javier.lo...@zalando.de> wrote:
> Hi Piotr,
>
> Sorry for the late reply.
> Did you try it also without a Kafka producer?
> Piotrek
>
> On 8 Nov 2017, at 14:57, Javier Lopez <javier.lo...@zalando.de> wrote:
> Hi,
> You don't need data. With data it will die faster. I tested as well with
a small data set, using the fromElements source, but it will
> Thanks for sharing this job.
>
> Do I need to feed some data to the Kafka to reproduce this issue with your
> script?
>
> Does this OOM issue also happen when you are not using the Kafka
> source/sink?
>
> Piotrek
>
> On 8 Nov 2017, at 14:08, Javier Lopez <javier.lo...
>>> It would be helpful if you share your test job with us.
>>> Which configurations did you try?
>>>
>>> -Ebru
>>>
>>> On 8 Nov 2017, at 14:43, Javier Lopez <javier.lo...@zalando.de>
>>> wrote:
>>>
>>> Hi,
>>>
>>>
Hi,
We have been facing a similar problem. We have tried some different
configurations, as proposed in another email thread by Flavio and Kien, but
it didn't work. We have a workaround similar to the one that Flavio has, we
restart the taskmanagers once they reach a memory threshold. We created a
>> allocations that could potentially leak memory)
>> Also, are you passing any special garbage collector options? (Maybe some
>> classes are not unloaded)
>> Are you using anything else that is special (such as protobuf or avro
>> formats, or any other big library)?
>>
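
On the class-unloading question, a quick stdlib-only check (my sketch, not from this thread) is to read the class-loading counters inside the suspect JVM. A loaded count that only ever grows while the unloaded count stays at zero is consistent with classloaders being pinned and metaspace never shrinking:

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

// Print class load/unload counters for the current JVM.
public class ClassUnloadStats {
    public static void main(String[] args) {
        ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
        System.out.println("currently loaded: " + cl.getLoadedClassCount());
        System.out.println("unloaded so far:  " + cl.getUnloadedClassCount());
        System.out.println("total ever loaded: " + cl.getTotalLoadedClassCount());
    }
}
```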
Hi all,
we are starting a lot of Flink jobs (streaming), and after we have started
200 or more jobs we see that the non-heap memory in the taskmanagers
increases a lot, to the point of killing the instances. We found out that
every time we start a new job, the committed non-heap memory increases
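
The committed non-heap figure can be sampled from inside the JVM with the standard `MemoryMXBean` (a sketch of mine, not code from this thread; Flink's TaskManager JVM memory metrics are derived from the same beans). Sampling it before and after each job submission shows whether submissions leave non-heap memory behind:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Sample the JVM's non-heap usage (includes metaspace).
public class NonHeapSample {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
        System.out.println("non-heap used bytes:      " + nonHeap.getUsed());
        System.out.println("non-heap committed bytes: " + nonHeap.getCommitted());
    }
}
```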
Hi all,
One of our use cases involves some Stream2Batch processing. We are
using Flink to read from a streaming source and deliver files to S3, after
applying some transformation to the stream. These Flink jobs are not
running 24/7, they are running on demand and consume a finite number of