[ https://issues.apache.org/jira/browse/HDFS-3359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13268376#comment-13268376 ]
Hudson commented on HDFS-3359:
------------------------------

Integrated in Hadoop-Mapreduce-trunk #1069 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1069/])
HDFS-3359. DFSClient.close should close cached sockets. Contributed by Todd Lipcon. (Revision 1333624)

Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1333624
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java

> DFSClient.close should close cached sockets
> -------------------------------------------
>
>                 Key: HDFS-3359
>                 URL: https://issues.apache.org/jira/browse/HDFS-3359
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 0.22.0, 2.0.0
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Critical
>             Fix For: 2.0.0
>
>         Attachments: hdfs-3359.txt, hdfs-3359.txt
>
>
> Some applications, such as the TT/JT (pre-2.0) and probably the RM/NM, cycle through DistributedFileSystem objects reasonably frequently. So long as they call close() this isn't a big problem, except that currently DFSClient.close() doesn't explicitly close the SocketCache. So unless a full GC runs (causing the references to get finalized), many SocketCaches can get orphaned, each with many open sockets inside. We should fix the close() function to close all cached sockets.
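The fix described in the issue can be sketched in a few lines. This is a minimal, self-contained illustration of the pattern, not the actual Hadoop source: the `SocketCache` and `Client` classes below are simplified stand-ins with assumed names and shapes, based only on the issue description.

```java
import java.io.IOException;
import java.net.Socket;
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical, simplified stand-in for the client's socket cache.
class SocketCache {
    private final Deque<Socket> sockets = new ArrayDeque<>();

    synchronized void put(Socket s) {
        sockets.addLast(s);
    }

    // Close and drop every cached socket eagerly, as HDFS-3359 requires,
    // instead of waiting for finalization after a full GC.
    synchronized void clear() {
        for (Socket s : sockets) {
            try {
                s.close();
            } catch (IOException ignored) {
                // Best effort: keep closing the remaining sockets.
            }
        }
        sockets.clear();
    }

    synchronized int size() {
        return sockets.size();
    }
}

// Simplified stand-in for DFSClient: close() now empties the cache.
class Client implements AutoCloseable {
    final SocketCache socketCache = new SocketCache();

    @Override
    public void close() {
        // Before the fix, close() left the cache alone and relied on
        // finalizers; now it releases all cached sockets explicitly.
        socketCache.clear();
    }
}

public class Hdfs3359Sketch {
    public static void main(String[] args) throws Exception {
        Client client = new Client();
        Socket s = new Socket(); // unconnected socket, fine for the demo
        client.socketCache.put(s);
        client.close();
        System.out.println(s.isClosed());           // true
        System.out.println(client.socketCache.size()); // 0
    }
}
```

With this pattern, applications that frequently create and close DistributedFileSystem objects no longer leak open sockets between GC cycles, since each close() releases its cached connections immediately.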