Saqeeb Shaikh created ATLAS-3525:
------------------------------------

             Summary: Hive server runs into OOM due to a huge heap footprint from the Atlas 
logger
                 Key: ATLAS-3525
                 URL: https://issues.apache.org/jira/browse/ATLAS-3525
             Project: Atlas
          Issue Type: Bug
    Affects Versions: 0.8.1
            Reporter: Saqeeb Shaikh
            Assignee: Saqeeb Shaikh


The Atlas hook produces a large message for Kafka, which Kafka rejects with a 
RecordTooLargeException:

 
{code:java}
hook.AtlasHook (AtlasHook.java:notifyEntitiesInternal(138)) - Failed to send 
notification - attempt #1; error=java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.RecordTooLargeException: The message is 
625562624 bytes when serialized which is larger than the maximum request size 
you have configured with the max.request.size configuration.{code}
 

The failed message is then logged in full to a log file, so Atlas's logger 
retains a huge copy of the message on the heap.

The Atlas hook compresses large messages in memory (handled in 
https://issues.apache.org/jira/browse/ATLAS-2075), but with insufficient heap 
space available, this in-memory compression itself fails.

 

The purpose of this Jira is to find a way to perform this compression without 
spiking heap usage.
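As a rough illustration of the direction this could take (a minimal sketch, not Atlas's actual code; the class and method names here are hypothetical), feeding the payload to the compressor in fixed-size chunks keeps the peak extra allocation to the compressed output plus one small buffer, instead of materializing a second full-size copy of the message:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPOutputStream;

public class StreamingCompress {

    // Hypothetical sketch: compress a large payload in fixed-size chunks so
    // the compressor works on one small window at a time. The compressed
    // output still accumulates, but for text-like notification payloads it
    // is far smaller than a second full-size byte[] of the original.
    static byte[] gzipInChunks(byte[] payload, int chunkSize) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(out, chunkSize)) {
            for (int off = 0; off < payload.length; off += chunkSize) {
                int len = Math.min(chunkSize, payload.length - off);
                gz.write(payload, off, len); // one chunk per write
            }
        }
        return out.toByteArray();
    }
}
```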

 

A temporary workaround is to double or triple the heap size, or to skip 
logging the failed message and log only its size instead.
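The second workaround could look something like the sketch below (names are illustrative, not real Atlas APIs): on send failure, log only the payload size and error type, so the serialized message never reaches the logger's buffers.

```java
public class NotificationLogSketch {

    // Hypothetical workaround sketch: describe a failed notification by its
    // size and error class only, never by its serialized contents, so the
    // logger does not retain a multi-hundred-megabyte string on the heap.
    static String describeFailure(byte[] serializedMessage, Throwable error) {
        return "Failed to send notification: size=" + serializedMessage.length
                + " bytes, error=" + error.getClass().getName();
    }
}
```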



--
This message was sent by Atlassian Jira
(v8.3.4#803005)