[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181959#comment-13181959
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Mapreduce-trunk #951 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/951/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.
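For illustration, here is a minimal sketch of the kind of change being discussed (the class and method names below are invented for the example and are not the actual BlockInfo code): the element-by-element for() loop is replaced by a single bulk {{System.arraycopy()}} call, with the Java 6 {{Arrays.copyOf()}} alternative shown for comparison.

{code:java}
import java.util.Arrays;

// Hypothetical stand-in for a growable internal Object[] such as BlockInfo's.
class ExpandableArray {
  private Object[] elements = new Object[3];

  // Original pattern: copy element by element in a for() loop.
  void ensureCapacityWithLoop(int needed) {
    if (elements.length >= needed) {
      return;
    }
    Object[] old = elements;
    elements = new Object[needed];
    for (int i = 0; i < old.length; i++) {
      elements[i] = old[i];
    }
  }

  // Proposed pattern: one bulk copy via System.arraycopy().
  void ensureCapacityWithArraycopy(int needed) {
    if (elements.length >= needed) {
      return;
    }
    Object[] old = elements;
    elements = new Object[needed];
    System.arraycopy(old, 0, elements, 0, old.length);
  }

  // Java 6 alternative: Arrays.copyOf() allocates and copies in one call,
  // though here it offers no tangible benefit over arraycopy().
  void ensureCapacityWithCopyOf(int needed) {
    if (elements.length < needed) {
      elements = Arrays.copyOf(elements, needed);
    }
  }
}
{code}

Either bulk form lets the JVM perform an intrinsic memory copy instead of a bounds-checked per-element loop, which is where the expected speedup would come from.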

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181950#comment-13181950
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Mapreduce-0.23-Build #153 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/153/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/

[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181939#comment-13181939
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Hdfs-trunk #918 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/918/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.


[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-07 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181931#comment-13181931
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Hdfs-0.23-Build #131 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/131/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/bl

[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181856#comment-13181856
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Mapreduce-trunk-Commit #1531 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1531/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.


[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181844#comment-13181844
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Mapreduce-0.23-Commit #363 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/363/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/sr

[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181837#comment-13181837
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Common-trunk-Commit #1511 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1511/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.


[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181830#comment-13181830
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Hdfs-trunk-Commit #1584 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1584/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.


[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181818#comment-13181818
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Common-0.23-Commit #352 (See 
[https://builds.apache.org/job/Hadoop-Common-0.23-Commit/352/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/cont

[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181810#comment-13181810
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Hdfs-0.23-Commit #342 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/342/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/

[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-02 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13178378#comment-13178378
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Hdfs-trunk #913 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/913/])
HDFS-554. Use System.arraycopy in BlockInfo.ensureCapacity. (harsh)

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1226239
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java


> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.


[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-01 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13178160#comment-13178160
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Mapreduce-trunk #945 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/945/])
HDFS-554. Use System.arraycopy in BlockInfo.ensureCapacity. (harsh)

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1226239
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java


> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.


[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-01 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13178157#comment-13178157
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Mapreduce-trunk-Commit #1505 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1505/])
HDFS-554. Use System.arraycopy in BlockInfo.ensureCapacity. (harsh)

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1226239
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java


> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.


[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-01 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13178153#comment-13178153
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Common-trunk-Commit #1484 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1484/])
HDFS-554. Use System.arraycopy in BlockInfo.ensureCapacity. (harsh)

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1226239
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java


> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.


[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-01 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13178151#comment-13178151
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Hdfs-trunk-Commit #1556 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1556/])
HDFS-554. Use System.arraycopy in BlockInfo.ensureCapacity. (harsh)

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1226239
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java


> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.


[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-01 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13178147#comment-13178147
 ] 

Harsh J commented on HDFS-554:
--

Findbugs result is not related to this. Committing. Thanks for the review guys!

> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.


[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2011-12-27 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176137#comment-13176137
 ] 

Hadoop QA commented on HDFS-554:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12508666/HDFS-554.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 javadoc.  The javadoc tool appears to have generated 20 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 1 new Findbugs (version 1.3.9) 
warnings.

-1 release audit.  The applied patch generated 1 release audit warnings 
(more than the trunk's current 0 warnings).

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1741//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1741//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1741//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1741//console

This message is automatically generated.

> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.


[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2011-12-17 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13171622#comment-13171622
 ] 

Aaron T. Myers commented on HDFS-554:
-

The patch looks good to me. +1 pending clean Jenkins results.

> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-554.patch, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.


[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2011-11-22 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13155338#comment-13155338
 ] 

Todd Lipcon commented on HDFS-554:
--

I think we want to use {{last*3}} instead of {{old.length}} here - it may be a 
little shorter.
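
As a rough sketch of what that suggestion amounts to (names assumed from the discussion, not copied from the patch): the array stores three slots per replica, so only the first {{last*3}} slots are live, and copying just those can be slightly cheaper than copying {{old.length}} slots when the old array still had spare capacity.

{code:java}
// Minimal sketch only: grow a triplets-style array that keeps three Object
// slots per replica, copying only the slots in use (last*3) rather than the
// full old capacity (old.length).
class TripletsSketch {
  private Object[] triplets = new Object[3 * 3]; // 3 slots per replica
  private int last = 1;                          // replicas currently stored

  void ensureCapacity(int additionalReplicas) {
    int required = (last + additionalReplicas) * 3;
    if (triplets.length >= required) {
      return;
    }
    Object[] old = triplets;
    triplets = new Object[required];
    // Only last*3 slots hold live data; old.length may be larger if the
    // previous array had unused room, so copying last*3 does less work.
    System.arraycopy(old, 0, triplets, 0, last * 3);
  }
}
{code}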

> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-554.patch
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.


[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2011-11-21 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13154260#comment-13154260
 ] 

Hadoop QA commented on HDFS-554:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12504505/HDFS-554.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.fs.viewfs.TestViewFsHdfs

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1593//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1593//console

This message is automatically generated.

> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-554.patch
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.


[jira] Commented: (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2009-08-27 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12748603#action_12748603
 ] 

Suresh Srinivas commented on HDFS-554:
--

Also the array size that we are copying is generally 9 (assuming 3 replicas)
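
(For context, an explanatory aside: the triplets array keeps three object references per replica, so with the common replication factor of 3 the copy typically moves about 3 * 3 = 9 references, which is why the absolute gain per call is small.)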

> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Priority: Minor
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.


[jira] Commented: (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2009-08-20 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12745470#action_12745470
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-554:
-

I agree that we should use System.arraycopy() in general.  We should do some 
benchmarks for this issue.
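
In that spirit, here is a minimal, self-contained timing sketch one could start from (illustrative only: the class name and iteration count are invented, and a proper micro-benchmark harness would give more trustworthy numbers than wall-clock timing). It copies a 9-element Object[], roughly the size discussed on this issue, first with a for() loop and then with System.arraycopy(), printing the elapsed time for each.

{code:java}
// Rough timing sketch, not a rigorous benchmark: JIT warm-up, GC pauses and
// dead-code elimination can all skew the numbers.
public class CopyBenchSketch {
  private static final int SIZE = 9;          // ~3 replicas * 3 slots
  private static final int ITERATIONS = 10000000;

  public static void main(String[] args) {
    Object[] src = new Object[SIZE];
    for (int i = 0; i < SIZE; i++) {
      src[i] = new Object();
    }

    // Copy with an element-by-element for() loop.
    long t0 = System.nanoTime();
    Object[] a = null;
    for (int n = 0; n < ITERATIONS; n++) {
      a = new Object[SIZE * 2];
      for (int i = 0; i < SIZE; i++) {
        a[i] = src[i];
      }
    }
    long loopNanos = System.nanoTime() - t0;

    // Copy with a single bulk System.arraycopy() call.
    long t1 = System.nanoTime();
    Object[] b = null;
    for (int n = 0; n < ITERATIONS; n++) {
      b = new Object[SIZE * 2];
      System.arraycopy(src, 0, b, 0, SIZE);
    }
    long arraycopyNanos = System.nanoTime() - t1;

    // Reference a and b so the copies cannot be optimized away entirely.
    System.out.println("for() loop        : " + loopNanos / 1e6 + " ms (" + a[0] + ")");
    System.out.println("System.arraycopy(): " + arraycopyNanos / 1e6 + " ms (" + b[0] + ")");
  }
}
{code}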

> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Priority: Minor
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.
