[ https://issues.apache.org/jira/browse/KAFKA-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16598305#comment-16598305 ]

Seweryn Habdank-Wojewodzki edited comment on KAFKA-7214 at 8/31/18 6:27 AM:
----------------------------------------------------------------------------

The keyword in all those errors is: KSTREAM-SOURCE-XXXXXXXXX

Kafka 1.1.1 makes it even more horrible :-(, but ...

... it seems this is related to memory consumption and perhaps to the number
of threads used by the streaming app.
I increased the JVM heap from 348 MB to 1 GB and decreased the number of
threads from 16 to 2, and since then the error does not happen as often.
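For reference, a minimal sketch of that change (only the relevant settings;
application id, bootstrap servers and topology are placeholders, and the heap
is set separately via the JVM flag -Xmx1g):

{code}
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class LowThreadCountSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my_application");   // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");   // placeholder
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);              // reduced from 16

        // Trivial placeholder topology; the real one is omitted here.
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("my_instance_medium_topic").foreach((key, value) -> { });

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
{code}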

I will check this further.

But I am coming back to my comment from the KAFKA-6777 bug report. I think
(after reviewing the code) that there are very many places in the code where a
potential OutOfMemoryError is not handled properly; it can be converted into
all kinds of random errors or even swallowed completely, resulting in random
behaviour of clients or servers.

I would expect an OutOfMemoryError to lead to a fast application crash with
clear information about where the problem is.
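
As a workaround on the application side (not a fix for the library itself), a
sketch like the one below at least makes such failures loud instead of silent.
It assumes the KafkaStreams#setUncaughtExceptionHandler method available in
1.1.x and the "streams" instance from the sketch above; the handler must be
registered before start().

{code}
// Sketch: crash fast and visibly when a stream thread dies from a fatal error
// such as OutOfMemoryError, instead of the error being swallowed.
streams.setUncaughtExceptionHandler((thread, throwable) -> {
    System.err.println("FATAL error in " + thread.getName() + ": " + throwable);
    if (throwable instanceof Error) {
        // JVM-level error: terminate the whole process immediately.
        Runtime.getRuntime().halt(1);
    }
});
{code}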



> Mystic FATAL error
> ------------------
>
>                 Key: KAFKA-7214
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7214
>             Project: Kafka
>          Issue Type: Bug
>          Components: streams
>    Affects Versions: 0.11.0.3, 1.1.1
>            Reporter: Seweryn Habdank-Wojewodzki
>            Priority: Critical
>
> Dears,
> Very often at startup of the streaming application I get the following exception:
> {code}
> Exception caught in process. taskId=0_1, processor=KSTREAM-SOURCE-0000000000, 
> topic=my_instance_medium_topic, partition=1, offset=198900203; 
> [org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:212),
>  
> org.apache.kafka.streams.processor.internals.AssignedTasks$2.apply(AssignedTasks.java:347),
>  
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:420),
>  
> org.apache.kafka.streams.processor.internals.AssignedTasks.process(AssignedTasks.java:339),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.processAndPunctuate(StreamThread.java:648),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:513),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:482),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:459)]
>  in thread 
> my_application-my_instance-my_instance_medium-72ee1819-edeb-4d85-9d65-f67f7c321618-StreamThread-62
> {code}
> and then (without a shutdown request from my side):
> {code}
> 2018-07-30 07:45:02 [ar313] [INFO ] StreamThread:912 - stream-thread 
> [my_application-my_instance-my_instance-72ee1819-edeb-4d85-9d65-f67f7c321618-StreamThread-62]
>  State transition from PENDING_SHUTDOWN to DEAD.
> {code}
> What is this?
> How to correctly handle it?
> Thanks in advance for help.
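
Regarding the "how to correctly handle it" question above: a minimal sketch of
at least detecting such a dead instance from the application side, assuming the
KafkaStreams#setStateListener method available in 1.1.x and the "streams"
instance from the sketches above (register it before start()):

{code}
// Sketch: notice when the streams instance stops running for good,
// e.g. after its stream threads have transitioned to DEAD.
streams.setStateListener((newState, oldState) -> {
    if (newState == KafkaStreams.State.ERROR || newState == KafkaStreams.State.NOT_RUNNING) {
        System.err.println("Streams instance stopped: " + oldState + " -> " + newState);
        // Decide here whether to restart the instance or exit the process.
    }
});
{code}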



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
