[ https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181856#comment-13181856 ]

Hudson commented on HDFS-554:
-----------------------------

Integrated in Hadoop-Mapreduce-trunk-Commit #1531 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1531/])
    Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

                
> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> ------------------------------------------------------------------
>
>                 Key: HDFS-554
>                 URL: https://issues.apache.org/jira/browse/HDFS-554
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>    Affects Versions: 0.21.0
>            Reporter: Steve Loughran
>            Assignee: Harsh J
>            Priority: Minor
>             Fix For: 0.23.1
>
>         Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> {{BlockInfo.ensureCapacity()}} uses a for() loop to copy the old array data into 
> the expanded array. {{System.arraycopy()}} is generally much faster for this, 
> as it can do a bulk memory copy. There is also the type-safe Java 6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.
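For reference, a minimal sketch of the change being described, assuming a BlockInfo-like class whose backing store is an Object[] named {{triplets}} with three slots per block; the class and field names here are illustrative assumptions, not the exact HDFS source:

{code:java}
// Illustrative sketch only: not the actual org.apache.hadoop.hdfs BlockInfo code.
// The "triplets" field name and the 3-slots-per-block layout are assumptions.
class BlockInfoSketch {
  private Object[] triplets;   // assumed backing array, 3 entries per block

  BlockInfoSketch(int capacity) {
    this.triplets = new Object[3 * capacity];
  }

  // Before: copies the old contents element by element in a for() loop.
  private Object[] ensureCapacityLoop(int num) {
    if (triplets.length >= (num + 1) * 3) {
      return triplets;
    }
    Object[] old = triplets;
    triplets = new Object[(num + 1) * 3];
    for (int i = 0; i < old.length; i++) {
      triplets[i] = old[i];
    }
    return triplets;
  }

  // After: a single bulk copy via System.arraycopy(), which the JVM can lower
  // to an intrinsic memmove-style copy and is generally much faster.
  private Object[] ensureCapacityArraycopy(int num) {
    if (triplets.length >= (num + 1) * 3) {
      return triplets;
    }
    Object[] old = triplets;
    triplets = new Object[(num + 1) * 3];
    System.arraycopy(old, 0, triplets, 0, old.length);
    return triplets;
  }
}
{code}

The Java 6 alternative would be {{triplets = Arrays.copyOf(old, (num + 1) * 3);}}, which is type-safe and also a bulk copy, but as the report notes it offers no tangible benefit over {{System.arraycopy()}} in this case.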

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
