[ https://issues.apache.org/jira/browse/HIVE-3098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13450978#comment-13450978 ]
Hudson commented on HIVE-3098:
------------------------------
Integrated in Hive-trunk-h0.21 #1654 (See
[https://builds.apache.org/job/Hive-trunk-h0.21/1654/])
HIVE-3098 : Memory leak from large number of FileSystem instances in
FileSystem.CACHE (Mithun R via Ashutosh Chauhan) (Revision 1382040)
Result = FAILURE
hashutosh :
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1382040
Files :
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/TUGIBasedProcessor.java
* /hive/trunk/shims/src/0.20/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java
* /hive/trunk/shims/src/common-secure/java/org/apache/hadoop/hive/shims/HadoopShimsSecure.java
* /hive/trunk/shims/src/common-secure/java/org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge20S.java
* /hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java
> Memory leak from large number of FileSystem instances in FileSystem.CACHE
> -------------------------------------------------------------------------
>
> Key: HIVE-3098
> URL: https://issues.apache.org/jira/browse/HIVE-3098
> Project: Hive
> Issue Type: Bug
> Components: Shims
> Affects Versions: 0.9.0
> Environment: Running with Hadoop 20.205.0.3+ / 1.0.x with security
> turned on.
> Reporter: Mithun Radhakrishnan
> Assignee: Mithun Radhakrishnan
> Fix For: 0.10.0
>
> Attachments: Hive-3098_(FS_closeAllForUGI()).patch, hive-3098.patch,
> Hive_3098.patch
>
>
> The problem manifested while stress-testing HCatalog 0.4.1 (as part of testing
> the Oracle backend).
> The HCatalog server ran out of memory (-Xmx2048m) in under 24 hours when
> pounded by 60 threads. The heap dump indicates that hadoop::FileSystem.CACHE
> held 1000000 instances of FileSystem, whose combined retained memory consumed
> the entire heap.
> It boiled down to hadoop::UserGroupInformation::equals() being implemented
> such that the "Subject" member is compared for identity ("==") rather than
> equivalence (".equals()"). This causes equivalent UGI instances to compare as
> unequal, so a new FileSystem instance is created and cached for each one.
> UGI.equals() is implemented this way, incidentally, as a fix for another
> problem (HADOOP-6670), so it is unlikely that that implementation can be
> modified.
> The solution is to check for UGI equivalence in HCatalog (i.e. in the Hive
> metastore), using a cache for UGI instances in the shims.
> I have a patch to fix this and will upload it shortly. I just ran an overnight
> test to confirm that the memory leak has been arrested.
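The failure mode described above can be illustrated with a minimal, self-contained Java sketch. All names here (Ugi, CacheLeakSketch, the maps) are hypothetical stand-ins, not Hadoop's actual UserGroupInformation or FileSystem.CACHE: when a cache key's equals() is identity-based, every freshly constructed but equivalent key looks new and the cache grows without bound; interning the keys, as the shim-level UGI cache does, restores sharing.

```java
import java.util.HashMap;
import java.util.Map;

public class CacheLeakSketch {
    // Hypothetical stand-in for UserGroupInformation: equals() compares the
    // wrapped subject by reference, mirroring the HADOOP-6670 behavior.
    static final class Ugi {
        final Object subject;   // compared with "==", like UGI's Subject
        final String user;
        Ugi(String user) { this.user = user; this.subject = new Object(); }
        @Override public boolean equals(Object o) {
            return o instanceof Ugi && ((Ugi) o).subject == this.subject;
        }
        @Override public int hashCode() { return System.identityHashCode(subject); }
    }

    // Stand-in for FileSystem.CACHE: one "FileSystem" per distinct key.
    static final Map<Ugi, String> fsCache = new HashMap<>();
    // The fix sketched in the issue: a cache that interns UGIs per user.
    static final Map<String, Ugi> ugiCache = new HashMap<>();

    static String getFs(Ugi ugi) {
        return fsCache.computeIfAbsent(ugi, u -> "FileSystem for " + u.user);
    }

    static Ugi internedUgi(String user) {
        return ugiCache.computeIfAbsent(user, Ugi::new);
    }

    public static void main(String[] args) {
        // Without interning: each request builds a fresh Ugi, so every
        // equivalent request adds another cache entry.
        for (int i = 0; i < 1000; i++) getFs(new Ugi("alice"));
        System.out.println("leaky cache size: " + fsCache.size());  // 1000

        fsCache.clear();
        // With the UGI cache keyed by user, equivalent requests share one entry.
        for (int i = 0; i < 1000; i++) getFs(internedUgi("alice"));
        System.out.println("fixed cache size: " + fsCache.size());  // 1
    }
}
```

The sketch leaks for the same structural reason as the real bug: HashMap consults equals()/hashCode(), so an identity-based equals() defeats the cache even when the keys are logically equivalent.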
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira