[ https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13178160#comment-13178160 ]
Hudson commented on HDFS-554:
-----------------------------

Integrated in Hadoop-Mapreduce-trunk #945 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/945/])
HDFS-554. Use System.arraycopy in BlockInfo.ensureCapacity. (harsh)

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1226239
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java

> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> ------------------------------------------------------------------
>
>                 Key: HDFS-554
>                 URL: https://issues.apache.org/jira/browse/HDFS-554
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>    Affects Versions: 0.21.0
>            Reporter: Steve Loughran
>            Assignee: Harsh J
>            Priority: Minor
>             Fix For: 0.24.0
>
>         Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into
> the expanded array. {{System.arraycopy()}} is generally much faster for
> this, as it can do a bulk memory copy. There is also the typesafe Java 6
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
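The change the issue describes can be sketched as follows. This is a standalone illustration, not the actual BlockInfo source: the field name `triplets` and the factor of 3 (one triplet of references per replica) mirror the class the patch touches, but the class and method names here are hypothetical.

```java
import java.util.Arrays;

// Hedged sketch of the HDFS-554 change: growing an internal Object[]
// array, comparing the old for-loop copy, the patched System.arraycopy
// bulk copy, and the Arrays.copyOf alternative mentioned in the issue.
class EnsureCapacityDemo {
    Object[] triplets = new Object[3]; // 3 slots per block replica (illustrative)

    /** Before the patch: element-by-element copy in a for() loop. */
    Object[] expandWithLoop(int extraReplicas) {
        Object[] wider = new Object[triplets.length + extraReplicas * 3];
        for (int i = 0; i < triplets.length; i++) {
            wider[i] = triplets[i];
        }
        return wider;
    }

    /** After the patch: bulk memory copy via System.arraycopy. */
    Object[] expandWithArraycopy(int extraReplicas) {
        Object[] wider = new Object[triplets.length + extraReplicas * 3];
        System.arraycopy(triplets, 0, wider, 0, triplets.length);
        return wider;
    }

    /** Java 6 alternative: Arrays.copyOf allocates and copies in one call. */
    Object[] expandWithCopyOf(int extraReplicas) {
        return Arrays.copyOf(triplets, triplets.length + extraReplicas * 3);
    }
}
```

All three variants produce an identical widened array; `System.arraycopy` wins on speed because it is an intrinsified bulk copy, while `Arrays.copyOf` (which itself calls `System.arraycopy` internally) adds type safety but, as the issue notes, no tangible benefit here since a new array of the same component type is allocated explicitly anyway.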