Happy to hear that!
On Thu, Jan 5, 2017 at 1:34 PM, Paulo Cezar wrote:
Hi Stephan, thanks for your support.

I was able to track down the problem a few days ago. Unirest was the one to
blame: I was using it in some map functions to connect to external services,
and for some reason it was using insane amounts of virtual memory.
Paulo Cezar
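(For readers who hit the same thing: below is a minimal, hypothetical sketch
of the pattern described above; the class name, URL, and types are
illustrative, not taken from this thread. The point is that a client like
Unirest keeps a shared HttpClient with its own thread pools alive until it is
explicitly shut down, so tying its lifecycle to the function's close() keeps
that state from outliving the task.

import com.mashape.unirest.http.HttpResponse;
import com.mashape.unirest.http.Unirest;
import org.apache.flink.api.common.functions.RichMapFunction;

// Hypothetical enrichment step: look up each id against an external service.
public class EnrichViaHttp extends RichMapFunction<String, String> {

    @Override
    public String map(String id) throws Exception {
        // Unirest lazily creates a shared HttpClient (thread pools, buffers)
        // on first use; all calls from this subtask go through that client.
        HttpResponse<String> response =
                Unirest.get("http://example.com/lookup/" + id).asString();
        return response.getBody();
    }

    @Override
    public void close() throws Exception {
        // Release Unirest's underlying client and its threads when the
        // task shuts down, instead of leaving them to accumulate.
        Unirest.shutdown();
    }
}
)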
On Mon, Dec 19, 2016 at 11:30 AM Stephan wrote:
Hi Paulo!
Hmm, interesting. A high discrepancy between virtual and physical memory
usually means that the process either maps large files into memory or
pre-allocates a lot of memory without immediately using it.
Neither of these things is done by Flink.
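(A quick way to see such a discrepancy is to compare VmSize, the process's
total virtual memory, with VmRSS, the portion actually resident in physical
RAM, in /proc/<pid>/status. A minimal Java sketch, assuming Linux:

import java.nio.file.Files;
import java.nio.file.Paths;

public class MemCheck {
    public static void main(String[] args) throws Exception {
        // /proc/self/status is Linux-specific; VmSize is the process's total
        // virtual memory, VmRSS the part actually resident in physical RAM.
        for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
            if (line.startsWith("VmSize") || line.startsWith("VmRSS")) {
                System.out.println(line);
            }
        }
    }
}

Reading /proc/<pid>/status of a TaskManager shows the same two figures for
that process; a large VmSize next to a modest VmRSS points at exactly the
mapping or pre-allocation behavior described above.)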
Could this be an effect of eit
- Are you using RocksDB?
No.
- What is your flink configuration, especially around memory settings?
I'm using the default config with 2GB for the jobmanager and 5GB for the taskmanagers.
I'm starting flink via "./bin/yarn-session.sh -d -n 5 -jm 2048 -tm 5120 -s
4 -nm 'Flink'"
- What do you use for T
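(Background for readers: on Hadoop 2.7 the NodeManager enforces a virtual
memory limit of yarn.nodemanager.vmem-pmem-ratio, by default 2.1, times the
container's physical memory, and kills containers that exceed it; with 5 GB
containers that is roughly 10.5 GB of virtual memory. If a library inflates
virtual memory the way the one in this thread did, a commonly suggested
workaround is to relax or disable that check in yarn-site.xml; the values
below are illustrative, not from this thread:

<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>

<!-- or keep the check and raise the ratio instead -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
)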
Hi Folks,

I'm running Flink (1.2-SNAPSHOT nightly) on YARN (Hadoop 2.7.2). A few
hours after I start a streaming job (built using the Kafka connector
0.10_2.11) it gets killed, seemingly for no reason. After inspecting the
logs, my best guess is that YARN is killing containers due to high virtual
memory usage.

Any guesses on why this might be happening, or tips on what I should be
looking for?