[ 
https://issues.apache.org/jira/browse/HIVE-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15317100#comment-15317100
 ] 

Naveen Gangam commented on HIVE-13749:
--------------------------------------

Thanks [~daijy] 
I have been running with some added instrumentation in the HMS code to figure 
out the cache sizes before and after. But your idea of getting that 
information from Hadoop's side seems better.
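For reference, a minimal sketch of what such a cache-size probe could look 
like (hypothetical, not the actual instrumentation I am running; it reads the 
package-private FileSystem.CACHE through reflection, so the "CACHE" and "map" 
field names are internal Hadoop details that may differ between releases):
{code:java}
import java.lang.reflect.Field;
import java.util.Map;

import org.apache.hadoop.fs.FileSystem;

// Hypothetical probe, for illustration only. It reads the package-private
// FileSystem.CACHE via reflection; the field names are internal Hadoop
// details and may change between versions.
public final class FsCacheProbe {

  private FsCacheProbe() {
  }

  /** Returns the number of cached FileSystem instances, or -1 if unreadable. */
  public static int cacheSize() {
    try {
      Field cacheField = FileSystem.class.getDeclaredField("CACHE");
      cacheField.setAccessible(true);
      Object cache = cacheField.get(null);

      Field mapField = cache.getClass().getDeclaredField("map");
      mapField.setAccessible(true);
      Map<?, ?> map = (Map<?, ?>) mapField.get(cache);
      return map.size();
    } catch (ReflectiveOperationException e) {
      return -1;
    }
  }
}
{code}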
There are 3 general areas that seem to be adding objects to the cache:
1) The compactor.Initiator and CompactorThread create roughly 420k objects. 
This appears to be addressed by HIVE-13151; this environment is not running 
with that fix.
2) Warehouse.getFs() and Warehouse.getFileStatusesForLocation() are invoked 
roughly 900k times, but not every call results in a new object in the cache.
3) A small percentage of the calls come from drop_table_core. 

I will look for other areas that use these FS APIs and could be adding to 
this cache.
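To illustrate why the Configuration objects pile up (a simplified 
illustration, not the HIVE-13151 patch itself): FileSystem.get() caches 
instances keyed by scheme, authority and the calling UGI, and each cached 
instance holds on to the Configuration it was created with, so background 
threads that run under a fresh UGI per cycle keep adding entries unless the 
entries are explicitly released, e.g. via FileSystem.closeAllForUGI().
{code:java}
import java.net.URI;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

// Simplified illustration (not the HIVE-13151 patch): each iteration runs
// under a brand-new UGI, so FileSystem.get() adds a new entry to the static
// cache, and that entry pins the Configuration it was created with.
public class FsCacheGrowthDemo {

  public static void main(String[] args) throws Exception {
    for (int i = 0; i < 3; i++) {
      final UserGroupInformation ugi =
          UserGroupInformation.createRemoteUser("worker-" + i);

      ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
        // Cached under (scheme, authority, ugi); stays alive until closed.
        FileSystem fs = FileSystem.get(new URI("file:///"), new Configuration());
        fs.exists(new Path("/tmp"));
        return null;
      });

      // Without this, the cached entry (and its Configuration) lives for the
      // life of the process.
      FileSystem.closeAllForUGI(ugi);
    }
  }
}
{code}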

Thejas, the fix from HIVE-3098 no longer exists in the codebase; it has been 
replaced by the fix in HIVE-8228 (similar intent). The root cause could very 
well be the initiator thread. I will check their configuration to confirm this 
and apply HIVE-13151 if needed. Thanks

> Memory leak in Hive Metastore
> -----------------------------
>
>                 Key: HIVE-13749
>                 URL: https://issues.apache.org/jira/browse/HIVE-13749
>             Project: Hive
>          Issue Type: Bug
>          Components: Metastore
>    Affects Versions: 1.1.0
>            Reporter: Naveen Gangam
>            Assignee: Naveen Gangam
>         Attachments: HIVE-13749.patch, Top_Consumers7.html
>
>
> Looking at a heap dump of 10GB, a large number of Configuration objects 
> (> 66k instances) are being retained. These objects, along with their 
> retained set, occupy about 95% of the heap space. This leads to HMS crashes 
> every few days.
> I will attach an exported snapshot from the Eclipse MAT.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
