[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181856#comment-13181856
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Mapreduce-trunk-Commit #1531 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1531/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the type-safe Java 6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.
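
For illustration, a minimal standalone sketch (a hypothetical demo for comparison purposes, not the attached patch or the actual BlockInfo code) contrasting the two copy styles discussed above:

{code}
// Hypothetical demo class; BlockInfo itself keeps block/datanode references
// in an Object[] "triplets" array that grows on demand.
class EnsureCapacityDemo {
  private Object[] triplets = new Object[3];

  // Original style: element-by-element for() loop.
  void ensureCapacityWithLoop(int toAdd) {
    Object[] old = triplets;
    triplets = new Object[old.length + toAdd * 3];
    for (int i = 0; i < old.length; i++) {
      triplets[i] = old[i];
    }
  }

  // Proposed style: one bulk copy via System.arraycopy().
  void ensureCapacityWithArraycopy(int toAdd) {
    Object[] old = triplets;
    triplets = new Object[old.length + toAdd * 3];
    System.arraycopy(old, 0, triplets, 0, old.length);
  }

  public static void main(String[] args) {
    EnsureCapacityDemo d = new EnsureCapacityDemo();
    d.ensureCapacityWithLoop(1);
    d.ensureCapacityWithArraycopy(1);
    System.out.println("capacity is now " + d.triplets.length);
  }
}
{code}

Arrays.copyOf(old, old.length + toAdd * 3) would express the same copy in one line, but as the description notes it brings no tangible benefit over System.arraycopy() here.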

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2349) DN should log a WARN, not an INFO when it detects a corruption during block transfer

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181854#comment-13181854
 ] 

Hudson commented on HDFS-2349:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1531 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1531/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> DN should log a WARN, not an INFO when it detects a corruption during block 
> transfer
> 
>
> Key: HDFS-2349
> URL: https://issues.apache.org/jira/browse/HDFS-2349
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Affects Versions: 0.20.204.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Fix For: 0.23.1
>
> Attachments: HDFS-2349.diff
>
>
> Currently, in DataNode.java, we have:
> {code}
>   LOG.info("Can't replicate block " + block
>   + " because on-disk length " + onDiskLength 
>   + " is shorter than NameNode recorded length " + 
> block.getNumBytes());
> {code}
> This log is better off as a WARN as it indicates (and also reports) a 
> corruption.
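
For illustration, a hedged sketch of the proposed severity change (the attached HDFS-2349.diff is the authoritative patch); it assumes the commons-logging Log instance already used in DataNode.java:

{code}
// Before: possible corruption reported only at INFO.
LOG.info("Can't replicate block " + block
    + " because on-disk length " + onDiskLength
    + " is shorter than NameNode recorded length " + block.getNumBytes());

// After: same message at WARN, so the condition stands out in the logs.
LOG.warn("Can't replicate block " + block
    + " because on-disk length " + onDiskLength
    + " is shorter than NameNode recorded length " + block.getNumBytes());
{code}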

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-1314) dfs.blocksize accepts only absolute value

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181855#comment-13181855
 ] 

Hudson commented on HDFS-1314:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1531 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1531/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> dfs.blocksize accepts only absolute value
> -
>
> Key: HDFS-1314
> URL: https://issues.apache.org/jira/browse/HDFS-1314
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Karim Saadah
>Assignee: Sho Shimauchi
>Priority: Minor
>  Labels: newbie
> Fix For: 0.23.1
>
> Attachments: hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt, 
> hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt
>
>
> Using "dfs.block.size=8388608" works 
> but "dfs.block.size=8mb" does not.
> Using "dfs.block.size=8mb" should throw some WARNING on NumberFormatException.
> (http://pastebin.corp.yahoo.com/56129)
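
For illustration only, a minimal standalone sketch of the kind of suffix parsing the report asks for (a hypothetical helper, not the committed Hadoop change):

{code}
// Hypothetical helper: accept plain byte counts as well as k/m/g suffixes.
class BlockSizeParser {
  static long parseSize(String value) {
    String v = value.trim().toLowerCase();
    long multiplier = 1L;
    if (v.endsWith("kb") || v.endsWith("k")) {
      multiplier = 1L << 10;
    } else if (v.endsWith("mb") || v.endsWith("m")) {
      multiplier = 1L << 20;
    } else if (v.endsWith("gb") || v.endsWith("g")) {
      multiplier = 1L << 30;
    }
    // Strip any trailing unit letters before parsing the numeric part.
    String digits = v.replaceAll("[a-z]+$", "");
    return Long.parseLong(digits) * multiplier;
  }

  public static void main(String[] args) {
    System.out.println(parseSize("8388608")); // 8388608
    System.out.println(parseSize("8mb"));     // 8388608 as well
  }
}
{code}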

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2729) Update BlockManager's comments regarding the invalid block set

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181852#comment-13181852
 ] 

Hudson commented on HDFS-2729:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1531 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1531/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Update BlockManager's comments regarding the invalid block set
> --
>
> Key: HDFS-2729
> URL: https://issues.apache.org/jira/browse/HDFS-2729
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.23.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: HDFS-2729.patch
>
>
> Looks like after HDFS-82 was addressed at some point, the comments and logs 
> still refer to two sets when there is really just one set.
> This patch changes the logs and comments to be more accurate about that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2726) "Exception in createBlockOutputStream" shouldn't delete exception stack trace

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181853#comment-13181853
 ] 

Hudson commented on HDFS-2726:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1531 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1531/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> "Exception in createBlockOutputStream" shouldn't delete exception stack trace
> -
>
> Key: HDFS-2726
> URL: https://issues.apache.org/jira/browse/HDFS-2726
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.23.0
>Reporter: Michael Bieniosek
>Assignee: Harsh J
> Fix For: 0.23.1
>
> Attachments: HDFS-2726.patch
>
>
> I'm occasionally (1/5000 times) getting this error after upgrading everything 
> to hadoop-0.18:
> 08/09/09 03:28:36 INFO dfs.DFSClient: Exception in createBlockOutputStream 
> java.io.IOException: Could not read from stream
> 08/09/09 03:28:36 INFO dfs.DFSClient: Abandoning block 
> blk_624229997631234952_8205908
> DFSClient contains the logging code:
> LOG.info("Exception in createBlockOutputStream " + ie);
> This would be better written with ie as the second argument to LOG.info, so 
> that the stack trace could be preserved.  As it is, I don't know how to start 
> debugging.
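
For illustration, a hedged sketch of the suggested fix (the attached HDFS-2726.patch is authoritative); it assumes a commons-logging Log instance as used in DFSClient:

{code}
// Before: concatenating the exception into the message drops its stack trace.
LOG.info("Exception in createBlockOutputStream " + ie);

// After: passing the Throwable as the second argument keeps the stack trace.
LOG.info("Exception in createBlockOutputStream", ie);
{code}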

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181844#comment-13181844
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Mapreduce-0.23-Commit #363 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/363/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/sr

[jira] [Commented] (HDFS-1314) dfs.blocksize accepts only absolute value

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181843#comment-13181843
 ] 

Hudson commented on HDFS-1314:
--

Integrated in Hadoop-Mapreduce-0.23-Commit #363 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/363/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project

[jira] [Commented] (HDFS-2726) "Exception in createBlockOutputStream" shouldn't delete exception stack trace

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181841#comment-13181841
 ] 

Hudson commented on HDFS-2726:
--

Integrated in Hadoop-Mapreduce-0.23-Commit #363 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/363/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project

[jira] [Commented] (HDFS-2729) Update BlockManager's comments regarding the invalid block set

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181840#comment-13181840
 ] 

Hudson commented on HDFS-2729:
--

Integrated in Hadoop-Mapreduce-0.23-Commit #363 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/363/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project

[jira] [Commented] (HDFS-2349) DN should log a WARN, not an INFO when it detects a corruption during block transfer

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181842#comment-13181842
 ] 

Hudson commented on HDFS-2349:
--

Integrated in Hadoop-Mapreduce-0.23-Commit #363 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/363/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project

[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181837#comment-13181837
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Common-trunk-Commit #1511 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1511/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the type-safe Java 6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2726) "Exception in createBlockOutputStream" shouldn't delete exception stack trace

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181834#comment-13181834
 ] 

Hudson commented on HDFS-2726:
--

Integrated in Hadoop-Common-trunk-Commit #1511 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1511/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> "Exception in createBlockOutputStream" shouldn't delete exception stack trace
> -
>
> Key: HDFS-2726
> URL: https://issues.apache.org/jira/browse/HDFS-2726
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.23.0
>Reporter: Michael Bieniosek
>Assignee: Harsh J
> Fix For: 0.23.1
>
> Attachments: HDFS-2726.patch
>
>
> I'm occasionally (1/5000 times) getting this error after upgrading everything 
> to hadoop-0.18:
> 08/09/09 03:28:36 INFO dfs.DFSClient: Exception in createBlockOutputStream 
> java.io.IOException: Could not read from stream
> 08/09/09 03:28:36 INFO dfs.DFSClient: Abandoning block 
> blk_624229997631234952_8205908
> DFSClient contains the logging code:
> LOG.info("Exception in createBlockOutputStream " + ie);
> This would be better written with ie as the second argument to LOG.info, so 
> that the stack trace could be preserved.  As it is, I don't know how to start 
> debugging.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-1314) dfs.blocksize accepts only absolute value

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181836#comment-13181836
 ] 

Hudson commented on HDFS-1314:
--

Integrated in Hadoop-Common-trunk-Commit #1511 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1511/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> dfs.blocksize accepts only absolute value
> -
>
> Key: HDFS-1314
> URL: https://issues.apache.org/jira/browse/HDFS-1314
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Karim Saadah
>Assignee: Sho Shimauchi
>Priority: Minor
>  Labels: newbie
> Fix For: 0.23.1
>
> Attachments: hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt, 
> hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt
>
>
> Using "dfs.block.size=8388608" works 
> but "dfs.block.size=8mb" does not.
> Using "dfs.block.size=8mb" should throw some WARNING on NumberFormatException.
> (http://pastebin.corp.yahoo.com/56129)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2349) DN should log a WARN, not an INFO when it detects a corruption during block transfer

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181835#comment-13181835
 ] 

Hudson commented on HDFS-2349:
--

Integrated in Hadoop-Common-trunk-Commit #1511 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1511/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> DN should log a WARN, not an INFO when it detects a corruption during block 
> transfer
> 
>
> Key: HDFS-2349
> URL: https://issues.apache.org/jira/browse/HDFS-2349
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Affects Versions: 0.20.204.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Fix For: 0.23.1
>
> Attachments: HDFS-2349.diff
>
>
> Currently, in DataNode.java, we have:
> {code}
>   LOG.info("Can't replicate block " + block
>   + " because on-disk length " + onDiskLength 
>   + " is shorter than NameNode recorded length " + 
> block.getNumBytes());
> {code}
> This log is better off as a WARN as it indicates (and also reports) a 
> corruption.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2729) Update BlockManager's comments regarding the invalid block set

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181833#comment-13181833
 ] 

Hudson commented on HDFS-2729:
--

Integrated in Hadoop-Common-trunk-Commit #1511 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1511/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Update BlockManager's comments regarding the invalid block set
> --
>
> Key: HDFS-2729
> URL: https://issues.apache.org/jira/browse/HDFS-2729
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.23.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: HDFS-2729.patch
>
>
> Looks like after HDFS-82 was addressed at some point, the comments and logs 
> still refer to two sets when there is really just one set.
> This patch changes the logs and comments to be more accurate about that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181830#comment-13181830
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Hdfs-trunk-Commit #1584 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1584/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the type-safe Java 6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-1314) dfs.blocksize accepts only absolute value

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181829#comment-13181829
 ] 

Hudson commented on HDFS-1314:
--

Integrated in Hadoop-Hdfs-trunk-Commit #1584 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1584/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> dfs.blocksize accepts only absolute value
> -
>
> Key: HDFS-1314
> URL: https://issues.apache.org/jira/browse/HDFS-1314
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Karim Saadah
>Assignee: Sho Shimauchi
>Priority: Minor
>  Labels: newbie
> Fix For: 0.23.1
>
> Attachments: hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt, 
> hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt
>
>
> Using "dfs.block.size=8388608" works 
> but "dfs.block.size=8mb" does not.
> Using "dfs.block.size=8mb" should throw some WARNING on NumberFormatException.
> (http://pastebin.corp.yahoo.com/56129)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2726) "Exception in createBlockOutputStream" shouldn't delete exception stack trace

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181827#comment-13181827
 ] 

Hudson commented on HDFS-2726:
--

Integrated in Hadoop-Hdfs-trunk-Commit #1584 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1584/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> "Exception in createBlockOutputStream" shouldn't delete exception stack trace
> -
>
> Key: HDFS-2726
> URL: https://issues.apache.org/jira/browse/HDFS-2726
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.23.0
>Reporter: Michael Bieniosek
>Assignee: Harsh J
> Fix For: 0.23.1
>
> Attachments: HDFS-2726.patch
>
>
> I'm occasionally (1/5000 times) getting this error after upgrading everything 
> to hadoop-0.18:
> 08/09/09 03:28:36 INFO dfs.DFSClient: Exception in createBlockOutputStream 
> java.io.IOException: Could not read from stream
> 08/09/09 03:28:36 INFO dfs.DFSClient: Abandoning block 
> blk_624229997631234952_8205908
> DFSClient contains the logging code:
> LOG.info("Exception in createBlockOutputStream " + ie);
> This would be better written with ie as the second argument to LOG.info, so 
> that the stack trace could be preserved.  As it is, I don't know how to start 
> debugging.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2349) DN should log a WARN, not an INFO when it detects a corruption during block transfer

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181828#comment-13181828
 ] 

Hudson commented on HDFS-2349:
--

Integrated in Hadoop-Hdfs-trunk-Commit #1584 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1584/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> DN should log a WARN, not an INFO when it detects a corruption during block 
> transfer
> 
>
> Key: HDFS-2349
> URL: https://issues.apache.org/jira/browse/HDFS-2349
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Affects Versions: 0.20.204.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Fix For: 0.23.1
>
> Attachments: HDFS-2349.diff
>
>
> Currently, in DataNode.java, we have:
> {code}
>   LOG.info("Can't replicate block " + block
>   + " because on-disk length " + onDiskLength 
>   + " is shorter than NameNode recorded length " + 
> block.getNumBytes());
> {code}
> This log is better off as a WARN as it indicates (and also reports) a 
> corruption.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2729) Update BlockManager's comments regarding the invalid block set

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181826#comment-13181826
 ] 

Hudson commented on HDFS-2729:
--

Integrated in Hadoop-Hdfs-trunk-Commit #1584 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1584/])
Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23, updating CHANGES.txt for trunk.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Update BlockManager's comments regarding the invalid block set
> --
>
> Key: HDFS-2729
> URL: https://issues.apache.org/jira/browse/HDFS-2729
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.23.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: HDFS-2729.patch
>
>
> Looks like after HDFS-82 was addressed at some point, the comments and logs 
> still refer to two sets when there is really just one set.
> This patch changes the logs and comments to be more accurate about that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181818#comment-13181818
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Common-0.23-Commit #352 (See 
[https://builds.apache.org/job/Hadoop-Common-0.23-Commit/352/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/cont

[jira] [Commented] (HDFS-1314) dfs.blocksize accepts only absolute value

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181817#comment-13181817
 ] 

Hudson commented on HDFS-1314:
--

Integrated in Hadoop-Common-0.23-Commit #352 (See 
[https://builds.apache.org/job/Hadoop-Common-0.23-Commit/352/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c

[jira] [Updated] (HDFS-1314) dfs.blocksize accepts only absolute value

2012-01-06 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-1314:
--

 Target Version/s:   (was: 0.23.1)
Affects Version/s: 0.23.0
    Fix Version/s: 0.23.1  (was: 0.24.0)

Backported to 0.23.

> dfs.blocksize accepts only absolute value
> -
>
> Key: HDFS-1314
> URL: https://issues.apache.org/jira/browse/HDFS-1314
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Karim Saadah
>Assignee: Sho Shimauchi
>Priority: Minor
>  Labels: newbie
> Fix For: 0.23.1
>
> Attachments: hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt, 
> hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt
>
>
> Using "dfs.block.size=8388608" works 
> but "dfs.block.size=8mb" does not.
> Using "dfs.block.size=8mb" should throw some WARNING on NumberFormatException.
> (http://pastebin.corp.yahoo.com/56129)
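
A rough, self-contained sketch of suffix-aware parsing in the spirit of this 
fix (the names below, such as parseSize, are made up for illustration and are 
not the actual Hadoop Configuration/StringUtils API introduced by 
HDFS-1314/HADOOP-7910):

{code}
// Illustrative only: accept "8388608", "8m", or "8mb" and report bad values clearly.
public final class BlockSizeParsing {
  static long parseSize(String value) {
    String v = value.trim().toLowerCase();
    if (v.endsWith("b")) {
      v = v.substring(0, v.length() - 1);          // allow "8mb" as well as "8m"
    }
    long multiplier = 1L;
    if (!v.isEmpty()) {
      char last = v.charAt(v.length() - 1);
      if (last == 'k' || last == 'm' || last == 'g') {
        multiplier = last == 'k' ? 1L << 10 : last == 'm' ? 1L << 20 : 1L << 30;
        v = v.substring(0, v.length() - 1);
      }
    }
    try {
      return Long.parseLong(v) * multiplier;
    } catch (NumberFormatException e) {
      // Surface a clear complaint instead of a bare NumberFormatException.
      throw new IllegalArgumentException("Cannot parse size value: " + value, e);
    }
  }

  public static void main(String[] args) {
    System.out.println(parseSize("8388608"));      // 8388608
    System.out.println(parseSize("8mb"));          // 8388608
  }
}
{code}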

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2349) DN should log a WARN, not an INFO when it detects a corruption during block transfer

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181816#comment-13181816
 ] 

Hudson commented on HDFS-2349:
--

Integrated in Hadoop-Common-0.23-Commit #352 (See 
[https://builds.apache.org/job/Hadoop-Common-0.23-Commit/352/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c

[jira] [Commented] (HDFS-2726) "Exception in createBlockOutputStream" shouldn't delete exception stack trace

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181815#comment-13181815
 ] 

Hudson commented on HDFS-2726:
--

Integrated in Hadoop-Common-0.23-Commit #352 (See 
[https://builds.apache.org/job/Hadoop-Common-0.23-Commit/352/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c

[jira] [Commented] (HDFS-2729) Update BlockManager's comments regarding the invalid block set

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181814#comment-13181814
 ] 

Hudson commented on HDFS-2729:
--

Integrated in Hadoop-Common-0.23-Commit #352 (See 
[https://builds.apache.org/job/Hadoop-Common-0.23-Commit/352/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c

[jira] [Updated] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-06 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-554:
-

Target Version/s:   (was: 0.23.1)
   Fix Version/s: 0.23.1  (was: 0.24.0)

Backported to 0.23.

> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.
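
As a simplified sketch of the idea (not the actual BlockInfo code), the growth 
step replaces a per-element for() loop with a single bulk copy:

{code}
// Illustration only; BlockInfo's real triplets handling is more involved.
class TripletArray {
  private Object[] triplets = new Object[3];

  /** Grows the backing array by 'additional' slots. */
  void ensureCapacity(int additional) {
    Object[] expanded = new Object[triplets.length + additional];
    // One bulk memory copy instead of copying one element per loop iteration.
    // Arrays.copyOf(triplets, expanded.length) would be an equivalent alternative.
    System.arraycopy(triplets, 0, expanded, 0, triplets.length);
    triplets = expanded;
  }
}
{code}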

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2726) "Exception in createBlockOutputStream" shouldn't delete exception stack trace

2012-01-06 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2726:
--

 Target Version/s:   (was: 0.23.1)
Affects Version/s: 0.23.0
    Fix Version/s: 0.23.1  (was: 0.24.0)

Backported to 0.23.

> "Exception in createBlockOutputStream" shouldn't delete exception stack trace
> -
>
> Key: HDFS-2726
> URL: https://issues.apache.org/jira/browse/HDFS-2726
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.23.0
>Reporter: Michael Bieniosek
>Assignee: Harsh J
> Fix For: 0.23.1
>
> Attachments: HDFS-2726.patch
>
>
> I'm occasionally (1/5000 times) getting this error after upgrading everything 
> to hadoop-0.18:
> 08/09/09 03:28:36 INFO dfs.DFSClient: Exception in createBlockOutputStream 
> java.io.IOException: Could not read from stream
> 08/09/09 03:28:36 INFO dfs.DFSClient: Abandoning block 
> blk_624229997631234952_8205908
> DFSClient contains the logging code:
> LOG.info("Exception in createBlockOutputStream " + ie);
> This would be better written with ie as the second argument to LOG.info, so 
> that the stack trace could be preserved.  As it is, I don't know how to start 
> debugging.
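
The suggested change is a one-liner: pass the exception as the throwable 
argument instead of concatenating it into the message, so the stack trace is 
kept. A minimal sketch using the commons-logging API (the surrounding DFSClient 
context is omitted):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

class CreateBlockLoggingSketch {
  private static final Log LOG = LogFactory.getLog(CreateBlockLoggingSketch.class);

  void report(java.io.IOException ie) {
    // Before: only ie.toString() reaches the log; the stack trace is lost.
    LOG.info("Exception in createBlockOutputStream " + ie);

    // After: the throwable overload logs the full stack trace as well.
    LOG.info("Exception in createBlockOutputStream", ie);
  }
}
{code}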

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181810#comment-13181810
 ] 

Hudson commented on HDFS-554:
-

Integrated in Hadoop-Hdfs-0.23-Commit #342 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/342/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/

[jira] [Commented] (HDFS-1314) dfs.blocksize accepts only absolute value

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181808#comment-13181808
 ] 

Hudson commented on HDFS-1314:
--

Integrated in Hadoop-Hdfs-0.23-Commit #342 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/342/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contr

[jira] [Updated] (HDFS-2729) Update BlockManager's comments regarding the invalid block set

2012-01-06 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2729:
--

Target Version/s:   (was: 0.23.1)
   Fix Version/s: 0.23.1  (was: 0.24.0)

Backported to 0.23.

> Update BlockManager's comments regarding the invalid block set
> --
>
> Key: HDFS-2729
> URL: https://issues.apache.org/jira/browse/HDFS-2729
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.23.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: HDFS-2729.patch
>
>
> Looks like after HDFS-82 was covered at some point, the comments and logs 
> still carry presence of two sets when there really is just one set.
> This patch changes the logs and comments to be more accurate about that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2349) DN should log a WARN, not an INFO when it detects a corruption during block transfer

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181807#comment-13181807
 ] 

Hudson commented on HDFS-2349:
--

Integrated in Hadoop-Hdfs-0.23-Commit #342 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/342/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contr

[jira] [Updated] (HDFS-2349) DN should log a WARN, not an INFO when it detects a corruption during block transfer

2012-01-06 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2349:
--

Target Version/s:   (was: 0.23.1)
   Fix Version/s: 0.23.1  (was: 0.24.0)

Backported to 0.23.

> DN should log a WARN, not an INFO when it detects a corruption during block 
> transfer
> 
>
> Key: HDFS-2349
> URL: https://issues.apache.org/jira/browse/HDFS-2349
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Affects Versions: 0.20.204.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Fix For: 0.23.1
>
> Attachments: HDFS-2349.diff
>
>
> Currently, in DataNode.java, we have:
> {code}
>   LOG.info("Can't replicate block " + block
>   + " because on-disk length " + onDiskLength 
>   + " is shorter than NameNode recorded length " + 
> block.getNumBytes());
> {code}
> This log is better off as a WARN as it indicates (and also reports) a 
> corruption.
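
The change is simply to log that statement at WARN; as a sketch mirroring the 
snippet above (the block and onDiskLength variables come from the surrounding 
DataNode code):

{code}
  // Same message as before, but at WARN so the detected corruption stands out.
  LOG.warn("Can't replicate block " + block
      + " because on-disk length " + onDiskLength
      + " is shorter than NameNode recorded length " + block.getNumBytes());
{code}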

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2729) Update BlockManager's comments regarding the invalid block set

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181805#comment-13181805
 ] 

Hudson commented on HDFS-2729:
--

Integrated in Hadoop-Hdfs-0.23-Commit #342 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/342/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contr

[jira] [Commented] (HDFS-2726) "Exception in createBlockOutputStream" shouldn't delete exception stack trace

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181806#comment-13181806
 ] 

Hudson commented on HDFS-2726:
--

Integrated in Hadoop-Hdfs-0.23-Commit #342 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/342/])
merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to 
branch-0.23.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contr

[jira] [Commented] (HDFS-2742) HA: observed dataloss in replication stress test

2012-01-06 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181796#comment-13181796
 ] 

Todd Lipcon commented on HDFS-2742:
---

This is really two separate bugs.
Bug 1 (HA specific): if a block report lists an RBW replica, the report is 
delayed on the SBN, and the file is then closed before the SBN processes the 
delayed report, the SBN will incorrectly mark the block as corrupt. I'll post a 
patch to fix this bug - just running some more tests on it.

Bug 2 (non-HA): if a block is marked corrupt and the DN then sends a block 
report, the NameNode will clear the corrupt state of that block. I believe this 
is a regression of HDFS-900, but there was no unit test with that fix, so it's 
hard to know whether this is the same bug or a subtly different one. I believe 
this affects the non-HA case as well, so I'll file a JIRA for it against trunk.

> HA: observed dataloss in replication stress test
> 
>
> Key: HDFS-2742
> URL: https://issues.apache.org/jira/browse/HDFS-2742
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: data-node, ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Blocker
> Attachments: log-colorized.txt
>
>
> The replication stress test case failed over the weekend since one of the 
> replicas went missing. Still diagnosing the issue, but it seems like the 
> chain of events was something like:
> - a block report was generated on one of the nodes while the block was being 
> written - thus the block report listed the block as RBW
> - when the standby replayed this queued message, it was replayed after the 
> file was marked complete. Thus it marked this replica as corrupt
> - it asked the DN holding the corrupt replica to delete it. And, I think, 
> removed it from the block map at this time.
> - That DN then did another block report before receiving the deletion. This 
> caused it to be re-added to the block map, since it was "FINALIZED" now.
> - Replication was lowered on the file, and it counted the above replica as 
> non-corrupt, and asked for the other replicas to be deleted.
> - All replicas were lost.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2730) HA: Refactor shared HA-related test code into HATestUtils class

2012-01-06 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181793#comment-13181793
 ] 

Eli Collins commented on HDFS-2730:
---

+1 lgtm

> HA: Refactor shared HA-related test code into HATestUtils class
> ---
>
> Key: HDFS-2730
> URL: https://issues.apache.org/jira/browse/HDFS-2730
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, test
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: HA branch (HDFS-1623)
>
> Attachments: hdfs-2730.txt
>
>
> A fair number of the HA tests are sharing code like 
> {{waitForStandbyToCatchUp}}, etc. We should refactor this code into an 
> HATestUtils class with static methods.
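For illustration, a rough sketch of the shape such a class could take; the method 
name waitForStandbyToCatchUp comes from the comment above, while the accessor 
calls and polling logic below are assumptions, not the contents of the attached patch:

{code}
// Hypothetical sketch only; the real helpers live in the attached hdfs-2730.txt
// patch, and the accessor names below are assumptions.
import org.apache.hadoop.hdfs.server.namenode.NameNode;

public final class HATestUtils {
  private HATestUtils() {}  // utility class: static methods only, never instantiated

  /** Poll until the standby has applied every transaction the active NN has written. */
  public static void waitForStandbyToCatchUp(NameNode active, NameNode standby)
      throws InterruptedException {
    long target = active.getNamesystem().getEditLog().getLastWrittenTxId();
    while (standby.getNamesystem().getFSImage().getLastAppliedTxId() < target) {
      Thread.sleep(100);  // simple polling; a real helper would also enforce a timeout
    }
  }
}
{code}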

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2764) HA: TestBackupNode is failing

2012-01-06 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181786#comment-13181786
 ] 

Todd Lipcon commented on HDFS-2764:
---

Passes for me.. which revision are you testing from?

> HA: TestBackupNode is failing
> -
>
> Key: HDFS-2764
> URL: https://issues.apache.org/jira/browse/HDFS-2764
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Aaron T. Myers
>
> Looks like it has been for a few days.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2730) HA: Refactor shared HA-related test code into HATestUtils class

2012-01-06 Thread Todd Lipcon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-2730:
--

Attachment: hdfs-2730.txt

Simple patch attached

> HA: Refactor shared HA-related test code into HATestUtils class
> ---
>
> Key: HDFS-2730
> URL: https://issues.apache.org/jira/browse/HDFS-2730
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, test
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: HA branch (HDFS-1623)
>
> Attachments: hdfs-2730.txt
>
>
> A fair number of the HA tests are sharing code like 
> {{waitForStandbyToCatchUp}}, etc. We should refactor this code into an 
> HATestUtils class with static methods.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2765) TestNameEditsConfigs is incorrectly swallowing IOE

2012-01-06 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181753#comment-13181753
 ] 

Hadoop QA commented on HDFS-2765:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12509749/HDFS-2765.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 4 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated 21 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 1 new Findbugs (version 1.3.9) 
warnings.

-1 release audit.  The applied patch generated 1 release audit warnings 
(more than the trunk's current 0 warnings).

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1765//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1765//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1765//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1765//console

This message is automatically generated.

> TestNameEditsConfigs is incorrectly swallowing IOE
> --
>
> Key: HDFS-2765
> URL: https://issues.apache.org/jira/browse/HDFS-2765
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.24.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-2765.patch
>
>
> The final portion of this test case is swallowing an IOE and in so doing 
> appearing to succeed, although it should not be succeeding as-written.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2737) HA: Automatically trigger log rolls periodically on the active NN

2012-01-06 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181727#comment-13181727
 ] 

Todd Lipcon commented on HDFS-2737:
---

bq. My preference is to read the editlog in progress and leave the rolling of 
editlog as it is today, a local NN decision

That's not how it is today -- the SecondaryNameNode is the one that triggers it.

bq. For HA requiring an external trigger for editlog rolling is going back to 
previous inflexibility

It's not inflexible, since it's not tightly interlocked. The HA SBN may trigger 
a roll, but it's also fine if the NN rolls more often than this due to other 
triggers.

> HA: Automatically trigger log rolls periodically on the active NN
> -
>
> Key: HDFS-2737
> URL: https://issues.apache.org/jira/browse/HDFS-2737
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>
> Currently, the edit log tailing process can only read finalized log segments. 
> So, if the active NN is not rolling its logs periodically, the SBN will lag a 
> lot. This also causes many datanode messages to be queued up in the 
> PendingDatanodeMessage structure.
> To combat this, the active NN needs to roll its logs periodically -- perhaps 
> based on a time threshold, or perhaps based on a number of transactions. I'm 
> not sure yet whether it's better to have the NN roll on its own or to have 
> the SBN ask the active NN to roll its logs.
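As a rough illustration of the transaction-count variant being discussed, the 
sketch below shows a standby-side trigger; the class name, threshold handling, 
and scheduling are assumptions, not the eventual implementation:

{code}
// Illustrative sketch of a transaction-count-based trigger run on the standby;
// the class name, threshold handling, and scheduling are assumptions.
import java.io.IOException;
import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol;

class EditLogRollTrigger implements Runnable {
  private final NamenodeProtocol activeNN;  // RPC proxy to the active NN
  private final long rollThresholdTxns;     // roll after this many new transactions
  private long lastRolledTxId;

  EditLogRollTrigger(NamenodeProtocol activeNN, long rollThresholdTxns) {
    this.activeNN = activeNN;
    this.rollThresholdTxns = rollThresholdTxns;
  }

  @Override
  public void run() {
    try {
      long currentTxId = activeNN.getTransactionID();
      if (currentTxId - lastRolledTxId >= rollThresholdTxns) {
        activeNN.rollEditLog();  // finalize the in-progress segment so the tailer can read it
        lastRolledTxId = currentTxId;
      }
    } catch (IOException e) {
      // a real implementation would log this and retry on the next scheduled run
    }
  }
}
{code}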

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2765) TestNameEditsConfigs is incorrectly swallowing IOE

2012-01-06 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181726#comment-13181726
 ] 

Todd Lipcon commented on HDFS-2765:
---

+1

> TestNameEditsConfigs is incorrectly swallowing IOE
> --
>
> Key: HDFS-2765
> URL: https://issues.apache.org/jira/browse/HDFS-2765
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.24.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-2765.patch
>
>
> The final portion of this test case is swallowing an IOE and in so doing 
> appearing to succeed, although it should not be succeeding as-written.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2737) HA: Automatically trigger log rolls periodically on the active NN

2012-01-06 Thread Suresh Srinivas (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181725#comment-13181725
 ] 

Suresh Srinivas commented on HDFS-2737:
---

My preference is not to trigger editlog rollover for the purpose of HA. When we 
undertook cleaning up the editlog, one of the good things about the design was 
that the NN could roll editlogs independently of the 
backup/secondary/checkpointer. It became a local decision at the namenode. For 
HA, requiring an external trigger for editlog rolling is going back to the 
previous inflexibility. My preference is to read the editlog in progress and 
leave the rolling of the editlog as it is today, a local NN decision.

> HA: Automatically trigger log rolls periodically on the active NN
> -
>
> Key: HDFS-2737
> URL: https://issues.apache.org/jira/browse/HDFS-2737
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>
> Currently, the edit log tailing process can only read finalized log segments. 
> So, if the active NN is not rolling its logs periodically, the SBN will lag a 
> lot. This also causes many datanode messages to be queued up in the 
> PendingDatanodeMessage structure.
> To combat this, the active NN needs to roll its logs periodically -- perhaps 
> based on a time threshold, or perhaps based on a number of transactions. I'm 
> not sure yet whether it's better to have the NN roll on its own or to have 
> the SBN ask the active NN to roll its logs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2766) HA: test for case where standby partially reads log and then performs checkpoint

2012-01-06 Thread Todd Lipcon (Created) (JIRA)
HA: test for case where standby partially reads log and then performs checkpoint


 Key: HDFS-2766
 URL: https://issues.apache.org/jira/browse/HDFS-2766
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Affects Versions: HA branch (HDFS-1623)
Reporter: Todd Lipcon
Assignee: Aaron T. Myers


Here's a potential bug case that we don't currently test for:
- SBN is reading a finalized edits file when NFS disappears halfway through (or 
some intermittent error happens)
- SBN performs a checkpoint and uploads it to the NN
- NN receives a checkpoint that doesn't correspond to the end of any log segment
- Both NN and SBN should be able to restart at this point.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2765) TestNameEditsConfigs is incorrectly swallowing IOE

2012-01-06 Thread Aaron T. Myers (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-2765:
-

Status: Patch Available  (was: Open)

> TestNameEditsConfigs is incorrectly swallowing IOE
> --
>
> Key: HDFS-2765
> URL: https://issues.apache.org/jira/browse/HDFS-2765
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.24.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-2765.patch
>
>
> The final portion of this test case is swallowing an IOE and in so doing 
> appearing to succeed, although it should not be succeeding as-written.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2765) TestNameEditsConfigs is incorrectly swallowing IOE

2012-01-06 Thread Aaron T. Myers (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-2765:
-

Attachment: HDFS-2765.patch

Here's a patch which addresses the issue. I also took the liberty of cleaning 
up this whole class a little bit while I was in here.

> TestNameEditsConfigs is incorrectly swallowing IOE
> --
>
> Key: HDFS-2765
> URL: https://issues.apache.org/jira/browse/HDFS-2765
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.24.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-2765.patch
>
>
> The final portion of this test case is swallowing an IOE and in so doing 
> appearing to succeed, although it should not be succeeding as-written.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2765) TestNameEditsConfigs is incorrectly swallowing IOE

2012-01-06 Thread Aaron T. Myers (Created) (JIRA)
TestNameEditsConfigs is incorrectly swallowing IOE
--

 Key: HDFS-2765
 URL: https://issues.apache.org/jira/browse/HDFS-2765
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers


The final portion of this test case is swallowing an IOE and in so doing 
appearing to succeed, although it should not be succeeding as-written.
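For illustration, the anti-pattern being described looks roughly like the 
following; this is a hypothetical snippet, not the actual TestNameEditsConfigs code:

{code}
// Hypothetical snippet, not the actual TestNameEditsConfigs code;
// runFinalPortion() is a made-up placeholder for the last part of the test.
try {
  runFinalPortion();  // throws IOException when something is actually wrong
} catch (IOException ioe) {
  // Swallowed: the test continues and appears to pass even though this IOE
  // signals a real failure. The fix is to fail() or rethrow here instead.
}
{code}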

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HDFS-2763) Apply audience and stability annotations to classes in HDFS for 1.x

2012-01-06 Thread Eli Collins (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins reassigned HDFS-2763:
-

Assignee: (was: Eli Collins)

> Apply audience and stability annotations to classes in HDFS for 1.x
> ---
>
> Key: HDFS-2763
> URL: https://issues.apache.org/jira/browse/HDFS-2763
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Tom White
>
> Port HADOOP-6668 to branch-1

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2699) Store data and checksums together in block file

2012-01-06 Thread Scott Carey (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181643#comment-13181643
 ] 

Scott Carey commented on HDFS-2699:
---

@Srivas:
bq. If you want to eventually support random-IO, then a block size of 4096 is 
too large for the CRC, as it will cause a read-modify-write cycle on the entire 
4K. 512-bytes reduces this overhead.

With CRC hardware acceleration now available, this is not a big overhead.  Without 
hardware acceleration it is ~800MB/sec for 4096-byte chunks, or 200,000 blocks 
per second, or 25% of one CPU load at 200MB/sec writes.  With hardware 
acceleration this drops by a factor of 4 to 8.

This is beside the point, though: a paranoid user could configure smaller CRC 
chunks and test that.  I'm suggesting that 4096 is a much saner default.

@Todd:
bq. Secondly, the disk manufacturers guarantee only a 512-byte atomicity on 
disk. Linux doing a 4K block write guarantees almost nothing wrt atomicity of 
that 4K write to disk. On a crash, unless you are running some sort of RAID or 
data-journal, there is a likelihood of the 4K block that's in-flight getting 
corrupted.

Actually, disk manufacturers are all using 4096-byte atomicity these days 
(starting with 500GB platters for most manufacturers) **.  HDFS should not 
target protecting power_of_two_bytes of data with a checksum, but rather 
(power_of_two_bytes - checksum_size) of data, so that the hardware atomicity (and 
OS page cache) lines up exactly with the hdfs checksum chunk + inlined CRC.

@Srivas:
bq. 2. An append happens a few days later to extend the file from 9K to 11K. 
CRC3 is now recomputed for the 3K-sized region spanning offsets 8K-11K and 
written out as CRC3-new. But there is a crash, and the entire 3K is not all 
written out cleanly 

This can be avoided entirely.
A. The OS and hardware can avoid partial page writes: ext4 and other filesystems 
only flush a page at a time, and hardware these days writes blocks in atomic 
4096-byte chunks.
B. The inlined CRC can be laid out so that a single 4096-byte page in the OS 
contains all of the data and the crc as an atomic chunk, and the CRC and its 
corresponding data are therefore not split across pages.

Under the above conditions, the performance would be excellent, and the data 
safety higher than the current situation or any application level crc (unless 
the application is inlining the crc to prevent splitting the data and crc 
across pages).
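For illustration, a minimal sketch of the layout being proposed: each 4096-byte 
page holds (4096 - 4) bytes of data plus its own 4-byte CRC32, so the checksum 
and the data it covers never straddle a page boundary. The class and method 
names are invented for the example:

{code}
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

// Invented names, illustration only: pack (4096 - 4) data bytes plus a 4-byte
// CRC32 into one 4096-byte page, so the checksum and the data it covers never
// straddle a page (and hence an atomic sector) boundary.
public class InlineCrcPage {
  static final int PAGE_SIZE = 4096;
  static final int CRC_SIZE = 4;                           // CRC32 fits in 4 bytes
  static final int DATA_PER_PAGE = PAGE_SIZE - CRC_SIZE;   // 4092 data bytes per page

  static ByteBuffer buildPage(byte[] data, int off) {
    CRC32 crc = new CRC32();
    crc.update(data, off, DATA_PER_PAGE);
    ByteBuffer page = ByteBuffer.allocate(PAGE_SIZE);
    page.put(data, off, DATA_PER_PAGE);
    page.putInt((int) crc.getValue());                     // inline CRC at the end of the page
    page.flip();
    return page;
  }
}
{code}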

About the transition to 4096 byte blocks on Hard drives ("Advanced Format" 
disks):
http://www.zdnet.com/blog/storage/are-you-ready-for-4k-sector-drives/731
http://en.wikipedia.org/wiki/Advanced_Format
http://www.seagate.com/docs/pdf/whitepaper/tp613_transition_to_4k_sectors.pdf
http://lwn.net/Articles/322777/
http://www.anandtech.com/show/2888

> Store data and checksums together in block file
> ---
>
> Key: HDFS-2699
> URL: https://issues.apache.org/jira/browse/HDFS-2699
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: dhruba borthakur
>Assignee: dhruba borthakur
>
> The current implementation of HDFS stores the data in one block file and the 
> metadata(checksum) in another block file. This means that every read from 
> HDFS actually consumes two disk iops, one to the datafile and one to the 
> checksum file. This is a major problem for scaling HBase, because HBase is 
> usually  bottlenecked on the number of random disk iops that the 
> storage-hardware offers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2740) Enable the trash feature by default

2012-01-06 Thread T Meyarivan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181638#comment-13181638
 ] 

T Meyarivan commented on HDFS-2740:
---

Re config changes, do they (for the params fs.trash.interval and 
fs.trash.checkpoint.interval) need to be present in:

[a] both files? (will create a new bug and apply the diff to core-default.xml, 
apply the diff to hdfs-default.xml)
[b] only core-default.xml? (will create a new bug, apply the diff to 
core-default.xml, close this bug as 'duplicate')
[c] only hdfs-default.xml? (will remove the entries from core-default.xml via a 
new bug, apply the diff to hdfs-default.xml)

--

> Enable the trash feature by default
> ---
>
> Key: HDFS-2740
> URL: https://issues.apache.org/jira/browse/HDFS-2740
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: hdfs client, name-node
>Affects Versions: 0.23.0
>Reporter: Harsh J
>  Labels: newbie
> Attachments: hdfs-2740.patch
>
>
> Currently trash is disabled out of the box. I do not think it'd be of high 
> surprise to anyone (but surely a relief when *hit happens) to have trash 
> enabled by default, with the usually recommended period of 1 day.
> Thoughts?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2725) hdfs script usage information is missing the information about "dfs" command

2012-01-06 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181612#comment-13181612
 ] 

Hadoop QA commented on HDFS-2725:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12508814/HDFS-2725.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 javadoc.  The javadoc tool appears to have generated 21 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 1 new Findbugs (version 1.3.9) 
warnings.

-1 release audit.  The applied patch generated 1 release audit warnings 
(more than the trunk's current 0 warnings).

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.hdfs.server.common.TestDistributedUpgrade

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1763//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1763//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1763//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1763//console

This message is automatically generated.

> hdfs script usage information is missing the information about "dfs" command
> 
>
> Key: HDFS-2725
> URL: https://issues.apache.org/jira/browse/HDFS-2725
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.24.0
>Reporter: Prashant Sharma
>  Labels: hdfs
> Fix For: 0.24.0
>
> Attachments: HDFS-2725.patch
>
>
> The hdfs script does not print the command "dfs" in its usage output.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2764) HA: TestBackupNode is failing

2012-01-06 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181611#comment-13181611
 ] 

Aaron T. Myers commented on HDFS-2764:
--

Here's the error:

{noformat}
Running org.apache.hadoop.hdfs.server.namenode.TestBackupNode
Tests run: 3, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 63.726 sec <<< 
FAILURE!

Results :

Failed tests:   
testCheckpointNode(org.apache.hadoop.hdfs.server.namenode.TestBackupNode): 
Expected non-empty 
/home/atm/src/apache/hadoop.git/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1/current/fsimage_003
  testBackupNode(org.apache.hadoop.hdfs.server.namenode.TestBackupNode): 
Expected non-empty 
/home/atm/src/apache/hadoop.git/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1/current/fsimage_003

Tests run: 3, Failures: 2, Errors: 0, Skipped: 0
{noformat}

This might be related to HDFS-2762.

> HA: TestBackupNode is failing
> -
>
> Key: HDFS-2764
> URL: https://issues.apache.org/jira/browse/HDFS-2764
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Aaron T. Myers
>
> Looks like it has been for a few days.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2764) HA: TestBackupNode is failing

2012-01-06 Thread Aaron T. Myers (Created) (JIRA)
HA: TestBackupNode is failing
-

 Key: HDFS-2764
 URL: https://issues.apache.org/jira/browse/HDFS-2764
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Affects Versions: HA branch (HDFS-1623)
Reporter: Aaron T. Myers


Looks like it has been for a few days.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1314) dfs.blocksize accepts only absolute value

2012-01-06 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-1314:
--

Target Version/s: 0.23.1  (was: 0.24.0)

Good candidate for merging to branch 23.

> dfs.blocksize accepts only absolute value
> -
>
> Key: HDFS-1314
> URL: https://issues.apache.org/jira/browse/HDFS-1314
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Karim Saadah
>Assignee: Sho Shimauchi
>Priority: Minor
>  Labels: newbie
> Fix For: 0.24.0
>
> Attachments: hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt, 
> hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt
>
>
> Using "dfs.block.size=8388608" works 
> but "dfs.block.size=8mb" does not.
> Using "dfs.block.size=8mb" should throw some WARNING on NumberFormatException.
> (http://pastebin.corp.yahoo.com/56129)
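For illustration, a hypothetical helper showing the desired parsing behaviour for 
a value like "8mb"; this is not the code in the attached hdfs-1314.txt patches:

{code}
// Hypothetical helper showing the desired behaviour for a value like "8mb";
// this is not the code in the attached hdfs-1314.txt patches.
static long parseBlockSize(String value) {
  String s = value.trim().toLowerCase();
  long multiplier = 1L;
  if (s.endsWith("mb")) {
    multiplier = 1024L * 1024L;
    s = s.substring(0, s.length() - 2);
  } else if (s.endsWith("m")) {
    multiplier = 1024L * 1024L;
    s = s.substring(0, s.length() - 1);
  }
  return Long.parseLong(s.trim()) * multiplier;  // "8mb" -> 8388608
}
{code}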

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-46) The namespace quota of root directory should not be Integer.MAX_VALUE

2012-01-06 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-46?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-46:


 Component/s: name-node
Target Version/s: 0.23.1

Good candidate for merging to branch 23.

> The namespace quota of root directory should not be Integer.MAX_VALUE
> -
>
> Key: HDFS-46
> URL: https://issues.apache.org/jira/browse/HDFS-46
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Uma Maheswara Rao G
>  Labels: newbie
> Fix For: 0.24.0
>
> Attachments: HDFS-46.1.patch, HDFS-46.patch
>
>
> The default namespace quota of the root directory is set to Integer.MAX_VALUE, but 
> the data type of INodeDirectoryWithQuota.nsQuota is long.  So the default quota 
> should probably be Long.MAX_VALUE or some other value like 1,000,000,000.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2337) DFSClient shouldn't keep multiple RPC proxy references

2012-01-06 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2337:
--

Target Version/s: 0.23.1

Good candidate for merging to branch 23.

> DFSClient shouldn't keep multiple RPC proxy references
> --
>
> Key: HDFS-2337
> URL: https://issues.apache.org/jira/browse/HDFS-2337
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 0.24.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Fix For: 0.24.0
>
> Attachments: hdfs-2337.patch
>
>
> With the commit of HADOOP-7635, {{RetryInvocationHandler}} objects will clean 
> up the underlying {{InvocationHandler}} objects they reference. We should 
> change {{DFSClient}} to take advantage of this fact.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2726) "Exception in createBlockOutputStream" shouldn't delete exception stack trace

2012-01-06 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2726:
--

Target Version/s: 0.23.1

Good candidate for merging to branch 23.

> "Exception in createBlockOutputStream" shouldn't delete exception stack trace
> -
>
> Key: HDFS-2726
> URL: https://issues.apache.org/jira/browse/HDFS-2726
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Michael Bieniosek
>Assignee: Harsh J
> Fix For: 0.24.0
>
> Attachments: HDFS-2726.patch
>
>
> I'm occasionally (1/5000 times) getting this error after upgrading everything 
> to hadoop-0.18:
> 08/09/09 03:28:36 INFO dfs.DFSClient: Exception in createBlockOutputStream 
> java.io.IOException: Could not read from stream
> 08/09/09 03:28:36 INFO dfs.DFSClient: Abandoning block 
> blk_624229997631234952_8205908
> DFSClient contains the logging code:
> LOG.info("Exception in createBlockOutputStream " + ie);
> This would be better written with ie as the second argument to LOG.info, so 
> that the stack trace could be preserved.  As it is, I don't know how to start 
> debugging.
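The suggested fix amounts to a one-line change along these lines (sketch only, 
surrounding code elided):

{code}
// Before: only ie.toString() ends up in the log; the stack trace is lost.
LOG.info("Exception in createBlockOutputStream " + ie);

// After: passing the Throwable as the second argument preserves the stack trace.
LOG.info("Exception in createBlockOutputStream", ie);
{code}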

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-06 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-554:
-

Target Version/s: 0.23.1

Good candidate for merging to branch 23.

> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.
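For illustration, a sketch of the arraycopy-based approach; the field name and 
method signature are assumptions, not the committed patch:

{code}
// Sketch of the idea only; the field name and signature are assumptions and
// this is not the committed HDFS-554 patch.
private Object[] triplets;  // 3 slots (datanode, previous, next) per replica

private void ensureCapacity(int additionalReplicas) {
  Object[] old = triplets;
  Object[] bigger = new Object[old.length + additionalReplicas * 3];
  // bulk copy replaces the element-by-element for() loop
  System.arraycopy(old, 0, bigger, 0, old.length);
  triplets = bigger;
}
{code}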

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2650) Replace @inheritDoc with @Override

2012-01-06 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2650:
--

Target Version/s: 0.23.1

Good candidate for merging to branch 23.

> Replace @inheritDoc with @Override 
> ---
>
> Key: HDFS-2650
> URL: https://issues.apache.org/jira/browse/HDFS-2650
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hari Mankude
>Assignee: Hari Mankude
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-2650-2.txt, HDFS-2650.txt, hdfs-2650
>
>
> @Override provides both the javadoc from the superclass and compile-time 
> detection if the overridden method is removed from the superclass.
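For illustration, a hypothetical class showing the substitution (the real change 
touches many HDFS files):

{code}
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical class used only to show the substitution.
class ClosingStream extends FilterOutputStream {
  ClosingStream(OutputStream out) { super(out); }

  // Before: /** {@inheritDoc} */ -- javadoc-only, no compiler checking.
  // After: @Override inherits the javadoc in tooling and fails compilation if
  // the superclass method is ever removed or its signature changes.
  @Override
  public void close() throws IOException {
    super.close();
  }
}
{code}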

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2729) Update BlockManager's comments regarding the invalid block set

2012-01-06 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2729:
--

Target Version/s: 0.23.1  (was: 0.24.0)

Good candidate for merging to branch 23.

> Update BlockManager's comments regarding the invalid block set
> --
>
> Key: HDFS-2729
> URL: https://issues.apache.org/jira/browse/HDFS-2729
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.23.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-2729.patch
>
>
> Looks like after HDFS-82 was addressed at some point, the comments and logs 
> still refer to two sets when there really is just one set.
> This patch changes the logs and comments to be accurate about that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2564) Cleanup unnecessary exceptions thrown and unnecessary casts

2012-01-06 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2564:
--

Target Version/s: 0.23.1

Good candidate for merging to branch 23.

> Cleanup unnecessary exceptions thrown and unnecessary casts
> ---
>
> Key: HDFS-2564
> URL: https://issues.apache.org/jira/browse/HDFS-2564
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node, hdfs client, name-node
>Affects Versions: 0.24.0
>Reporter: Hari Mankude
>Assignee: Hari Mankude
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-2564-1.txt, hadoop-2564.trunk.patch, 
> hadoop-2564.trunk.patch, hadoop-2564.trunk.patch
>
>
> Cleaning up some of the java files with unnecessary exceptions and casts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2572) Unnecessary double-check in DN#getHostName

2012-01-06 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2572:
--

Target Version/s: 0.23.1

Good candidate for merging to branch 23.

> Unnecessary double-check in DN#getHostName
> --
>
> Key: HDFS-2572
> URL: https://issues.apache.org/jira/browse/HDFS-2572
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Affects Versions: 0.24.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Fix For: 0.24.0
>
> Attachments: HDFS-2572.patch, HDFS-2572.patch
>
>
> We do an unnecessary double config.get() inside DN#getHostName(...). It can be 
> removed by this patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2349) DN should log a WARN, not an INFO when it detects a corruption during block transfer

2012-01-06 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2349:
--

Target Version/s: 0.23.1

Good candidate for merging to branch 23.

> DN should log a WARN, not an INFO when it detects a corruption during block 
> transfer
> 
>
> Key: HDFS-2349
> URL: https://issues.apache.org/jira/browse/HDFS-2349
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Affects Versions: 0.20.204.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Fix For: 0.24.0
>
> Attachments: HDFS-2349.diff
>
>
> Currently, in DataNode.java, we have:
> {code}
>   LOG.info("Can't replicate block " + block
>   + " because on-disk length " + onDiskLength 
>   + " is shorter than NameNode recorded length " + 
> block.getNumBytes());
> {code}
> This log is better off as a WARN as it indicates (and also reports) a 
> corruption.
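The proposed change is essentially the same statement logged at WARN (sketch):

{code}
// Sketch of the proposed change: same message, logged at WARN instead of INFO.
LOG.warn("Can't replicate block " + block
    + " because on-disk length " + onDiskLength
    + " is shorter than NameNode recorded length " + block.getNumBytes());
{code}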

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2410) Further clean up hard-coded configuration keys

2012-01-06 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2410:
--

Target Version/s: 0.23.1

Good candidate for merging to branch 23.

> Further clean up hard-coded configuration keys
> --
>
> Key: HDFS-2410
> URL: https://issues.apache.org/jira/browse/HDFS-2410
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node, name-node, test
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-2410.txt, HDFS-2410.txt
>
>
> HDFS code is littered with hardcoded config key names. This jira changes them to 
> use the DFSConfigKeys constants.
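A representative example of the cleanup; the constants shown exist in 
DFSConfigKeys, but this particular call site is illustrative:

{code}
// Before (hard-coded key string):
//   int repl = conf.getInt("dfs.replication", 3);
// After (constants from DFSConfigKeys; this call site is illustrative):
int repl = conf.getInt(DFSConfigKeys.DFS_REPLICATION_KEY,
                       DFSConfigKeys.DFS_REPLICATION_DEFAULT);
{code}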

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2373) Commands using webhdfs and hftp print unnecessary debug information on the console with security enabled

2012-01-06 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2373:
--

Target Version/s: 0.23.1

Good candidate for merging to branch 23.

> Commands using webhdfs and hftp print unnecessary debug information on the 
> console with security enabled
> 
>
> Key: HDFS-2373
> URL: https://issues.apache.org/jira/browse/HDFS-2373
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 0.24.0
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
> Fix For: 0.20.205.0, 0.24.0
>
> Attachments: HDFS-2373.20s-1.patch, HDFS-2373.20s.patch, 
> HDFS-2373.patch, HDFS-2373.patch
>
>
> Run an hdfs command using either hftp or webhdfs and it prints the following 
> line to the console (system out):
> Retrieving token from: https://NN_HOST:50470/getDelegationToken
> This probably happens in the code where we get the delegation token. It should be 
> removed, as people using the dfs commands to get a handle to the content (such 
> as dfs -cat) will now get an extra line that is not part of the actual 
> content. This should either go only to the log or not be logged at all.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2763) Apply audience and stability annotations to classes in HDFS for 1.x

2012-01-06 Thread Tom White (Created) (JIRA)
Apply audience and stability annotations to classes in HDFS for 1.x
---

 Key: HDFS-2763
 URL: https://issues.apache.org/jira/browse/HDFS-2763
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Tom White


Port HADOOP-6668 to branch-1

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HDFS-2763) Apply audience and stability annotations to classes in HDFS for 1.x

2012-01-06 Thread Tom White (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White reassigned HDFS-2763:
---

Assignee: Eli Collins

> Apply audience and stability annotations to classes in HDFS for 1.x
> ---
>
> Key: HDFS-2763
> URL: https://issues.apache.org/jira/browse/HDFS-2763
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Tom White
>Assignee: Eli Collins
>
> Port HADOOP-6668 to branch-1

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2753) Standby namenode stuck in safemode during a failover

2012-01-06 Thread Todd Lipcon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-2753:
--

Issue Type: Sub-task  (was: Bug)
Parent: HDFS-1623

> Standby namenode stuck in safemode during a failover
> 
>
> Key: HDFS-2753
> URL: https://issues.apache.org/jira/browse/HDFS-2753
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Hari Mankude
>Assignee: Hari Mankude
> Attachments: HDFS-2753.patch
>
>
> Write traffic is initiated from the client. Manual failover is done by killing 
> the NN and converting a different standby to active. The NN is restarted as 
> standby. The restarted standby stays in safemode forever. More information is 
> in the description.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HDFS-2709) HA: Appropriately handle error conditions in EditLogTailer

2012-01-06 Thread Todd Lipcon (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-2709.
---

   Resolution: Fixed
Fix Version/s: HA branch (HDFS-1623)
 Hadoop Flags: Reviewed

All the HA tests passed. Committed to branch, thanks atm.

> HA: Appropriately handle error conditions in EditLogTailer
> --
>
> Key: HDFS-2709
> URL: https://issues.apache.org/jira/browse/HDFS-2709
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Todd Lipcon
>Assignee: Aaron T. Myers
>Priority: Critical
> Fix For: HA branch (HDFS-1623)
>
> Attachments: HDFS-2709-HDFS-1623.patch, HDFS-2709-HDFS-1623.patch, 
> HDFS-2709-HDFS-1623.patch, HDFS-2709-HDFS-1623.patch, 
> HDFS-2709-HDFS-1623.patch, HDFS-2709-HDFS-1623.patch, 
> HDFS-2709-HDFS-1623.patch, HDFS-2709-HDFS-1623.patch, 
> HDFS-2709-HDFS-1623.patch
>
>
> Currently if the edit log tailer experiences an error replaying edits in the 
> middle of a file, it will go back to retrying from the beginning of the file 
> on the next tailing iteration. This is incorrect since many of the edits will 
> have already been replayed, and not all edits are idempotent.
> Instead, we either need to (a) support reading from the middle of a finalized 
> file (ie skip those edits already applied), or (b) abort the standby if it 
> hits an error while tailing. If "a" isn't simple, let's do "b" for now and 
> come back to 'a' later since this is a rare circumstance and better to abort 
> than be incorrect.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2709) HA: Appropriately handle error conditions in EditLogTailer

2012-01-06 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181575#comment-13181575
 ] 

Todd Lipcon commented on HDFS-2709:
---

+1, looks good to me. Will commit momentarily after verifying the HA tests 
still pass on my machine.

> HA: Appropriately handle error conditions in EditLogTailer
> --
>
> Key: HDFS-2709
> URL: https://issues.apache.org/jira/browse/HDFS-2709
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Todd Lipcon
>Assignee: Aaron T. Myers
>Priority: Critical
> Attachments: HDFS-2709-HDFS-1623.patch, HDFS-2709-HDFS-1623.patch, 
> HDFS-2709-HDFS-1623.patch, HDFS-2709-HDFS-1623.patch, 
> HDFS-2709-HDFS-1623.patch, HDFS-2709-HDFS-1623.patch, 
> HDFS-2709-HDFS-1623.patch, HDFS-2709-HDFS-1623.patch, 
> HDFS-2709-HDFS-1623.patch
>
>
> Currently if the edit log tailer experiences an error replaying edits in the 
> middle of a file, it will go back to retrying from the beginning of the file 
> on the next tailing iteration. This is incorrect since many of the edits will 
> have already been replayed, and not all edits are idempotent.
> Instead, we either need to (a) support reading from the middle of a finalized 
> file (ie skip those edits already applied), or (b) abort the standby if it 
> hits an error while tailing. If "a" isn't simple, let's do "b" for now and 
> come back to 'a' later since this is a rare circumstance and better to abort 
> than be incorrect.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HDFS-2762) TestCheckpoint is timing out

2012-01-06 Thread Todd Lipcon (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon reassigned HDFS-2762:
-

Assignee: Uma Maheswara Rao G  (was: Todd Lipcon)

> TestCheckpoint is timing out
> 
>
> Key: HDFS-2762
> URL: https://issues.apache.org/jira/browse/HDFS-2762
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Aaron T. Myers
>Assignee: Uma Maheswara Rao G
>
> TestCheckpoint is timing out on the HA branch, and has been for a few days.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2762) TestCheckpoint is timing out

2012-01-06 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181574#comment-13181574
 ] 

Todd Lipcon commented on HDFS-2762:
---

It seems like the {{testMultipleSecondaryNameNodes}} test is the one that's 
timing out. I traced this to the commit of HDFS-2720. Uma, can you look into 
this? If we can't figure it out in the next day or so I'll revert HDFS-2720 on 
the branch to fix the tests.

> TestCheckpoint is timing out
> 
>
> Key: HDFS-2762
> URL: https://issues.apache.org/jira/browse/HDFS-2762
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Aaron T. Myers
>Assignee: Todd Lipcon
>
> TestCheckpoint is timing out on the HA branch, and has been for a few days.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2762) TestCheckpoint is timing out

2012-01-06 Thread Todd Lipcon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-2762:
--

Component/s: ha

> TestCheckpoint is timing out
> 
>
> Key: HDFS-2762
> URL: https://issues.apache.org/jira/browse/HDFS-2762
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Aaron T. Myers
>Assignee: Todd Lipcon
>
> TestCheckpoint is timing out on the HA branch, and has been for a few days.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2762) TestCheckpoint is timing out

2012-01-06 Thread Aaron T. Myers (Created) (JIRA)
TestCheckpoint is timing out


 Key: HDFS-2762
 URL: https://issues.apache.org/jira/browse/HDFS-2762
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: HA branch (HDFS-1623)
Reporter: Aaron T. Myers
Assignee: Todd Lipcon


TestCheckpoint is timing out on the HA branch, and has been for a few days.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2740) Enable the trash feature by default

2012-01-06 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181571#comment-13181571
 ] 

Hadoop QA commented on HDFS-2740:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12509711/hdfs-2740.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1764//console

This message is automatically generated.

> Enable the trash feature by default
> ---
>
> Key: HDFS-2740
> URL: https://issues.apache.org/jira/browse/HDFS-2740
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: hdfs client, name-node
>Affects Versions: 0.23.0
>Reporter: Harsh J
>  Labels: newbie
> Attachments: hdfs-2740.patch
>
>
> Currently trash is disabled out of the box. I do not think it'd be of high 
> surprise to anyone (but surely a relief when *hit happens) to have trash 
> enabled by default, with the usually recommended period of 1 day.
> Thoughts?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2740) Enable the trash feature by default

2012-01-06 Thread T Meyarivan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Meyarivan updated HDFS-2740:
--

Attachment: hdfs-2740.patch

> Enable the trash feature by default
> ---
>
> Key: HDFS-2740
> URL: https://issues.apache.org/jira/browse/HDFS-2740
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: hdfs client, name-node
>Affects Versions: 0.23.0
>Reporter: Harsh J
>  Labels: newbie
> Attachments: hdfs-2740.patch
>
>
> Currently trash is disabled out of the box. I do not think it'd be of high 
> surprise to anyone (but surely a relief when *hit happens) to have trash 
> enabled by default, with the usually recommended period of 1 day.
> Thoughts?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2740) Enable the trash feature by default

2012-01-06 Thread T Meyarivan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Meyarivan updated HDFS-2740:
--

Status: Patch Available  (was: Open)

The patch introduces the following changes (see the sketch below for the 
equivalent settings):

[1] Create a new trash checkpoint every ~360 minutes (6 hours)
[2] Retain deleted data for ~1440 minutes (24 hours)
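For illustration only, the same settings expressed as client-side Configuration 
overrides; the patch itself presumably changes the *-default.xml entries:

{code}
import org.apache.hadoop.conf.Configuration;

// Illustration only: the same settings expressed as client-side overrides;
// the patch itself presumably changes the *-default.xml entries.
public class TrashDefaultsExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setLong("fs.trash.checkpoint.interval", 360);  // checkpoint trash every ~6 hours
    conf.setLong("fs.trash.interval", 1440);            // retain deleted data for ~24 hours
    System.out.println("fs.trash.interval = " + conf.get("fs.trash.interval"));
  }
}
{code}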


> Enable the trash feature by default
> ---
>
> Key: HDFS-2740
> URL: https://issues.apache.org/jira/browse/HDFS-2740
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: hdfs client, name-node
>Affects Versions: 0.23.0
>Reporter: Harsh J
>  Labels: newbie
>
> Currently trash is disabled out of the box. I do not think it'd be of high 
> surprise to anyone (but surely a relief when *hit happens) to have trash 
> enabled by default, with the usually recommended period of 1 day.
> Thoughts?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2179) HA: namenode fencing mechanism

2012-01-06 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2179:
--

Issue Type: New Feature  (was: Sub-task)
Parent: (was: HDFS-1623)

> HA: namenode fencing mechanism
> --
>
> Key: HDFS-2179
> URL: https://issues.apache.org/jira/browse/HDFS-2179
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: name-node
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: HA branch (HDFS-1623)
>
> Attachments: hdfs-2179.txt, hdfs-2179.txt, hdfs-2179.txt
>
>
> In an HA cluster, when there are two NNs, the invariant that only one NN is 
> active at a time has to be preserved in order to prevent "split brain 
> syndrome." Thus, when a standby NN is transition to "active" state during a 
> failover, it needs to somehow _fence_ the formerly active NN to ensure that 
> it can no longer perform edits. This JIRA is to discuss and implement NN 
> fencing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2179) HA: namenode fencing mechanism

2012-01-06 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181562#comment-13181562
 ] 

Eli Collins commented on HDFS-2179:
---

Agree. Spoke to Todd, am going to move this to common for HADOOP-7938 (there's 
minimal HDFS dependency).

> HA: namenode fencing mechanism
> --
>
> Key: HDFS-2179
> URL: https://issues.apache.org/jira/browse/HDFS-2179
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: HA branch (HDFS-1623)
>
> Attachments: hdfs-2179.txt, hdfs-2179.txt, hdfs-2179.txt
>
>
> In an HA cluster with two NNs, the invariant that only one NN is 
> active at a time has to be preserved in order to prevent "split brain 
> syndrome." Thus, when a standby NN transitions to the "active" state during a 
> failover, it needs to somehow _fence_ the formerly active NN to ensure that 
> it can no longer perform edits. This JIRA is to discuss and implement NN 
> fencing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2499) Fix RPC client creation bug from HDFS-2459

2012-01-06 Thread Jitendra Nath Pandey (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-2499:
---

Issue Type: Sub-task  (was: Improvement)
Parent: HDFS-2060

> Fix RPC client creation bug from HDFS-2459
> --
>
> Key: HDFS-2499
> URL: https://issues.apache.org/jira/browse/HDFS-2499
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Fix For: 0.24.0
>
> Attachments: HDFS-2499.txt, HDFS-2499.txt
>
>
> HDFS-2459 incorrectly implemented the RPC getProxy call for the JournalProtocol 
> client side. It sets retry policies and other policies that are not necessary.
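
As a rough illustration of the direction of the fix (not the actual patch), a plain 
proxy without the extra retry wrapping might be obtained along these lines; the exact 
getProxy overload and the use of JournalProtocol.versionID are assumptions:

{code}
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.protocol.JournalProtocol;
import org.apache.hadoop.ipc.RPC;

// Hypothetical sketch: obtain a plain JournalProtocol proxy without layering
// retry policies on top of it. See the attached patch for the real change.
InetSocketAddress nnAddr = new InetSocketAddress("namenode.example.com", 8020);
Configuration conf = new Configuration();
JournalProtocol proxy = RPC.getProxy(
    JournalProtocol.class, JournalProtocol.versionID, nnAddr, conf);
{code}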

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2687) Tests are failing with ClassCastException, due to new protocol changes

2012-01-06 Thread Jitendra Nath Pandey (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-2687:
---

Issue Type: Sub-task  (was: Bug)
Parent: HDFS-2060

> Tests are failing with ClassCastException, due to new protocol changes 
> ---
>
> Key: HDFS-2687
> URL: https://issues.apache.org/jira/browse/HDFS-2687
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Suresh Srinivas
> Attachments: HDFS-2687.txt
>
>
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/lastCompletedBuild/testReport/
> java.lang.ClassCastException: org.apache.hadoop.hdfs.protocol.HdfsFileStatus 
> cannot be cast to org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$1.hasNext(DistributedFileSystem.java:452)
>   at org.apache.hadoop.fs.FileSystem$5.hasNext(FileSystem.java:1551)
>   at org.apache.hadoop.fs.FileSystem$5.next(FileSystem.java:1581)
>   at org.apache.hadoop.fs.FileSystem$5.next(FileSystem.java:1541)
>   at 
> org.apache.hadoop.fs.TestListFiles.testDirectory(TestListFiles.java:146)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2753) Standby namenode stuck in safemode during a failover

2012-01-06 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181521#comment-13181521
 ] 

Todd Lipcon commented on HDFS-2753:
---

This looks reasonable. Can you add a regression test to TestHASafeMode to 
demonstrate the problem before we commit it?
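
A very rough, hypothetical outline of such a test (as a method added to TestHASafeMode) 
is sketched below; the HA test helpers used here (MiniDFSNNTopology.simpleHATopology(), 
transitionToActive, restartNameNode) are assumptions about the HA-branch test 
utilities, and the real test would also drive the writes and the failover itself:

{code}
// Hypothetical skeleton only; helper names are assumptions about the HA-branch
// test utilities, and the actual regression test would be more involved.
@Test(timeout = 120000)
public void testRestartedStandbyLeavesSafemode() throws Exception {
  Configuration conf = new Configuration();
  MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
      .nnTopology(MiniDFSNNTopology.simpleHATopology())
      .numDataNodes(3)
      .build();
  try {
    cluster.transitionToActive(0);
    // ... write some data, kill NN0, transition NN1 to active ...
    cluster.restartNameNode(0); // comes back as a standby
    // The real test would wait/retry here; the restarted standby must not
    // stay in safemode forever.
    assertFalse(cluster.getNamesystem(0).isInSafeMode());
  } finally {
    cluster.shutdown();
  }
}
{code}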

> Standby namenode stuck in safemode during a failover
> 
>
> Key: HDFS-2753
> URL: https://issues.apache.org/jira/browse/HDFS-2753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Hari Mankude
>Assignee: Hari Mankude
> Attachments: HDFS-2753.patch
>
>
> Write traffic is initiated from the client. A manual failover is done by killing 
> the NN and converting a different standby to active. The NN is then restarted as a 
> standby. The restarted standby stays in safemode forever. More information is in the 
> description.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2756) Warm standby does not read the in_progress edit log

2012-01-06 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181517#comment-13181517
 ] 

Todd Lipcon commented on HDFS-2756:
---

The old log files are purged when checkpoints are saved; see 
NNStorageRetentionManager.

> Warm standby does not read the in_progress edit log 
> 
>
> Key: HDFS-2756
> URL: https://issues.apache.org/jira/browse/HDFS-2756
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Hari Mankude
>
> Warm standby does not read the in_progress edit log. This could result in 
> the standby taking a long time to become the primary during a failover scenario.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2761) Improve Hadoop subcomponent integration in Hadoop 0.23

2012-01-06 Thread Roman Shaposhnik (Created) (JIRA)
Improve Hadoop subcomponent integration in Hadoop 0.23
--

 Key: HDFS-2761
 URL: https://issues.apache.org/jira/browse/HDFS-2761
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: build, hdfs client, scripts
Affects Versions: 0.23.0
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
 Fix For: 0.23.1


Please see HADOOP-7939 for a complete description and discussion. This JIRA is 
for patch tracking purposes only.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2753) Standby namenode stuck in safemode during a failover

2012-01-06 Thread Hari Mankude (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181509#comment-13181509
 ] 

Hari Mankude commented on HDFS-2753:


Todd/Aaron, 

Can you review the patch? I have tested the patch with the failing config and 
it works.

> Standby namenode stuck in safemode during a failover
> 
>
> Key: HDFS-2753
> URL: https://issues.apache.org/jira/browse/HDFS-2753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Hari Mankude
>Assignee: Hari Mankude
> Attachments: HDFS-2753.patch
>
>
> Write traffic is initiated from the client. A manual failover is done by killing 
> the NN and converting a different standby to active. The NN is then restarted as a 
> standby. The restarted standby stays in safemode forever. More information is in the 
> description.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2753) Standby namenode stuck in safemode during a failover

2012-01-06 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181507#comment-13181507
 ] 

Hadoop QA commented on HDFS-2753:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12509695/HDFS-2753.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 javadoc.  The javadoc tool appears to have generated 21 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 1 new Findbugs (version 1.3.9) 
warnings.

-1 release audit.  The applied patch generated 1 release audit warnings 
(more than the trunk's current 0 warnings).

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1762//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1762//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1762//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1762//console

This message is automatically generated.

> Standby namenode stuck in safemode during a failover
> 
>
> Key: HDFS-2753
> URL: https://issues.apache.org/jira/browse/HDFS-2753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Hari Mankude
>Assignee: Hari Mankude
> Attachments: HDFS-2753.patch
>
>
> Write traffic is initiated from the client. A manual failover is done by killing 
> the NN and converting a different standby to active. The NN is then restarted as a 
> standby. The restarted standby stays in safemode forever. More information is in the 
> description.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2756) Warm standby does not read the in_progress edit log

2012-01-06 Thread Hari Mankude (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181503#comment-13181503
 ] 

Hari Mankude commented on HDFS-2756:


Finalizing edit logs frequently could result in a large number of edit log files 
in the shared edits dir. Finalizing an edit log file should probably be coupled 
with either purging or a mechanism for archiving/coalescing older log files.
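
Purely as an illustration of the kind of coupling being suggested (and not how 
NNStorageRetentionManager actually works), a naive pruning pass over finalized segments 
in the shared edits dir might look like the sketch below; the edits_* file-name pattern 
and the keep-newest-N policy are assumptions:

{code}
import java.io.File;
import java.io.FilenameFilter;
import java.util.Arrays;

// Naive, hypothetical sketch: keep only the newest `keep` finalized edit
// segments in the shared edits directory and delete the older ones.
public final class SharedEditsPruner {
  public static void pruneFinalizedSegments(File sharedEditsDir, int keep) {
    File[] segments = sharedEditsDir.listFiles(new FilenameFilter() {
      @Override
      public boolean accept(File dir, String name) {
        return name.startsWith("edits_") && !name.contains("inprogress");
      }
    });
    if (segments == null || segments.length <= keep) {
      return;
    }
    // Assumes zero-padded transaction ids in the names, so a lexicographic
    // sort puts the oldest segments first.
    Arrays.sort(segments);
    for (int i = 0; i < segments.length - keep; i++) {
      segments[i].delete();
    }
  }
}
{code}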

> Warm standby does not read the in_progress edit log 
> 
>
> Key: HDFS-2756
> URL: https://issues.apache.org/jira/browse/HDFS-2756
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Hari Mankude
>
> Warm standby does not read the in_progress edit log. This could result in 
> the standby taking a long time to become the primary during a failover scenario.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2753) Standby namenode stuck in safemode during a failover

2012-01-06 Thread Hari Mankude (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181467#comment-13181467
 ] 

Hari Mankude commented on HDFS-2753:


A similar fix has been done for HDFS-2309. The HDFS-2309 patch, though for a different 
problem, also addresses this HA issue.

> Standby namenode stuck in safemode during a failover
> 
>
> Key: HDFS-2753
> URL: https://issues.apache.org/jira/browse/HDFS-2753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Hari Mankude
>Assignee: Hari Mankude
> Attachments: HDFS-2753.patch
>
>
> Write traffic is initiated from the client. A manual failover is done by killing 
> the NN and converting a different standby to active. The NN is then restarted as a 
> standby. The restarted standby stays in safemode forever. More information is in the 
> description.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2753) Standby namenode stuck in safemode during a failover

2012-01-06 Thread Hari Mankude (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Mankude updated HDFS-2753:
---

Status: Patch Available  (was: Open)

> Standby namenode stuck in safemode during a failover
> 
>
> Key: HDFS-2753
> URL: https://issues.apache.org/jira/browse/HDFS-2753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Hari Mankude
>Assignee: Hari Mankude
> Attachments: HDFS-2753.patch
>
>
> Write traffic is initiated from the client. A manual failover is done by killing 
> the NN and converting a different standby to active. The NN is then restarted as a 
> standby. The restarted standby stays in safemode forever. More information is in the 
> description.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2753) Standby namenode stuck in safemode during a failover

2012-01-06 Thread Hari Mankude (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Mankude updated HDFS-2753:
---

Attachment: HDFS-2753.patch

> Standby namenode stuck in safemode during a failover
> 
>
> Key: HDFS-2753
> URL: https://issues.apache.org/jira/browse/HDFS-2753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Hari Mankude
>Assignee: Hari Mankude
> Attachments: HDFS-2753.patch
>
>
> Write traffic is initiated from the client. A manual failover is done by killing 
> the NN and converting a different standby to active. The NN is then restarted as a 
> standby. The restarted standby stays in safemode forever. More information is in the 
> description.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2760) HDFS notification

2012-01-06 Thread Robert Joseph Evans (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181385#comment-13181385
 ] 

Robert Joseph Evans commented on HDFS-2760:
---

Sorry, HADOOP-1742 was a typo; it should be HDFS-1742.

> HDFS notification 
> --
>
> Key: HDFS-2760
> URL: https://issues.apache.org/jira/browse/HDFS-2760
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 0.20.2
>Reporter: Denny Ye
>Priority: Minor
>  Labels: HDFS
>
> Clients would like to receive notifications of the namespace operations they are 
> interested in at HDFS, like 'rename', 'block allocation' and so on. The NameNode 
> should support notifying clients of such changes in an appropriate way. 
> In my opinion, the notification should be applied between the NameNode 
> RPC server and the actual NameNode object (i.e. a proxy for the NameNode object).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2760) HDFS notification

2012-01-06 Thread Robert Joseph Evans (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181382#comment-13181382
 ] 

Robert Joseph Evans commented on HDFS-2760:
---

We are looking at adding pub/sub notification support to Hadoop: to 
MapReduce for job end notification, and to HDFS for these types of events. The 
work is still very preliminary (HADOOP-7821), but I think this is a duplicate of 
HDFS-1742, which is a subtask of HADOOP-7821 to add these types of events. 
If HADOOP-1742 would fit your needs, then please dupe this JIRA to it. 

> HDFS notification 
> --
>
> Key: HDFS-2760
> URL: https://issues.apache.org/jira/browse/HDFS-2760
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 0.20.2
>Reporter: Denny Ye
>Priority: Minor
>  Labels: HDFS
>
> Clients would like to receive notifications of the namespace operations they are 
> interested in at HDFS, like 'rename', 'block allocation' and so on. The NameNode 
> should support notifying clients of such changes in an appropriate way. 
> In my opinion, the notification should be applied between the NameNode 
> RPC server and the actual NameNode object (i.e. a proxy for the NameNode object).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2760) HDFS notification

2012-01-06 Thread Denny Ye (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181344#comment-13181344
 ] 

Denny Ye commented on HDFS-2760:


I tried this as a local demo. A NameNode proxy object takes the role of the actual 
NameNode at the NN RPC server. It only records the method invocation and forwards the 
request to the actual NameNode object, so no additional RPC call is needed. 
You are right, but it is not a good solution for every client that needs namespace 
notifications to have to run and parse logs on its own. A similar facility exists in 
the Linux kernel.
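
As a minimal sketch of the proxy idea being described (illustrative only, not HDFS 
code), a JDK dynamic proxy can record each namespace call and then forward it to the 
real target object:

{code}
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

/** Minimal, hypothetical sketch of the proposed proxy: record each call, then
 *  forward it unchanged to the real target (e.g. the actual NameNode object). */
public final class NotifyingProxy implements InvocationHandler {
  private final Object target;

  private NotifyingProxy(Object target) {
    this.target = target;
  }

  @SuppressWarnings("unchecked")
  public static <T> T wrap(Class<T> iface, T target) {
    return (T) Proxy.newProxyInstance(
        iface.getClassLoader(), new Class<?>[] { iface }, new NotifyingProxy(target));
  }

  @Override
  public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    // A real implementation would publish (method name, args) to interested clients here.
    System.out.println("namespace op: " + method.getName());
    return method.invoke(target, args); // no extra RPC call; plain in-process forwarding
  }
}
{code}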

> HDFS notification 
> --
>
> Key: HDFS-2760
> URL: https://issues.apache.org/jira/browse/HDFS-2760
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 0.20.2
>Reporter: Denny Ye
>Priority: Minor
>  Labels: HDFS
>
> Clients would like to receive notifications of the namespace operations they are 
> interested in at HDFS, like 'rename', 'block allocation' and so on. The NameNode 
> should support notifying clients of such changes in an appropriate way. 
> In my opinion, the notification should be applied between the NameNode 
> RPC server and the actual NameNode object (i.e. a proxy for the NameNode object).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2760) HDFS notification

2012-01-06 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181337#comment-13181337
 ] 

Harsh J commented on HDFS-2760:
---

Ok. For now you can also tail -f the NameNode's audit logs into your 'client', 
and parse out these actions -- should be near realtime and should not need 
additional RPC calls per op.
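
For example, a tiny parser for one audit log line could look like the sketch below; the 
assumption that each line is a series of tab-separated key=value fields (ugi=, ip=, 
cmd=, src=, dst=, ...) should be verified against the audit log format of your NameNode 
version:

{code}
import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch: split one HDFS audit log line into key/value fields.
 *  The tab-separated key=value layout is an assumption; check your NN's output. */
public final class AuditLineParser {
  public static Map<String, String> parse(String line) {
    Map<String, String> fields = new HashMap<String, String>();
    for (String token : line.split("\t")) {
      int eq = token.indexOf('=');
      if (eq > 0) {
        fields.put(token.substring(0, eq), token.substring(eq + 1));
      }
    }
    return fields;
  }
}
{code}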

> HDFS notification 
> --
>
> Key: HDFS-2760
> URL: https://issues.apache.org/jira/browse/HDFS-2760
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 0.20.2
>Reporter: Denny Ye
>Priority: Minor
>  Labels: HDFS
>
> Clients would like to receive notifications of the namespace operations they are 
> interested in at HDFS, like 'rename', 'block allocation' and so on. The NameNode 
> should support notifying clients of such changes in an appropriate way. 
> In my opinion, the notification should be applied between the NameNode 
> RPC server and the actual NameNode object (i.e. a proxy for the NameNode object).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2760) HDFS notification

2012-01-06 Thread Denny Ye (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181328#comment-13181328
 ] 

Denny Ye commented on HDFS-2760:


Hi Harsh. I want HDFS to notify the client of file/folder/block events (create path, 
block allocation, ...) for various purposes. The NameNode RPC layer cannot react to the 
method invocations on the NameNode object.

> HDFS notification 
> --
>
> Key: HDFS-2760
> URL: https://issues.apache.org/jira/browse/HDFS-2760
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 0.20.2
>Reporter: Denny Ye
>Priority: Minor
>  Labels: HDFS
>
> Clients would like to receive notifications of the namespace operations they are 
> interested in at HDFS, like 'rename', 'block allocation' and so on. The NameNode 
> should support notifying clients of such changes in an appropriate way. 
> In my opinion, the notification should be applied between the NameNode 
> RPC server and the actual NameNode object (i.e. a proxy for the NameNode object).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2697) Move RefreshAuthPolicy, RefreshUserMappings, GetUserMappings protocol to protocol buffers

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181309#comment-13181309
 ] 

Hudson commented on HDFS-2697:
--

Integrated in Hadoop-Mapreduce-trunk #950 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/950/])
HDFS-2697. Move RefreshAuthPolicy, RefreshUserMappings, GetUserMappings 
protocol to protocol buffers.

jitendra : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1227887
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/GetUserMappingsProtocolClientSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/GetUserMappingsProtocolPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/GetUserMappingsProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshAuthorizationPolicyProtocolClientSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshAuthorizationPolicyProtocolPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshAuthorizationPolicyProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshUserMappingsProtocolClientSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshUserMappingsProtocolPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshUserMappingsProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/GetGroups.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/GetUserMappingsProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/RefreshAuthorizationPolicyProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/RefreshUserMappingsProtocol.proto


> Move RefreshAuthPolicy, RefreshUserMappings, GetUserMappings protocol to 
> protocol buffers
> -
>
> Key: HDFS-2697
> URL: https://issues.apache.org/jira/browse/HDFS-2697
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Suresh Srinivas
>Assignee: Jitendra Nath Pandey
> Fix For: 0.24.0
>
> Attachments: HDFS-2697.trunk.patch, HDFS-2697.trunk.patch, 
> HDFS-2697.trunk.patch, HDFS-2697.trunk.patch
>
>
> Protocol buffer implementation for RefreshAuthPolicy, RefreshUserMappings, 
> GetUserMappings protocols, includes PB service definitions, translator 
> implementations and changes to DFSAdmin and NamenodeRpcServer to use protocol 
> buffer implementations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2710) HDFS part of MAPREDUCE-3529, HADOOP-7933

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181304#comment-13181304
 ] 

Hudson commented on HDFS-2710:
--

Integrated in Hadoop-Mapreduce-trunk #950 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/950/])
HDFS-2710. Add HDFS tests related to HADOOP-7933. Contributed by Siddarth 
Seth.

suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1227756
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemHdfs.java


> HDFS part of MAPREDUCE-3529, HADOOP-7933
> 
>
> Key: HDFS-2710
> URL: https://issues.apache.org/jira/browse/HDFS-2710
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Siddharth Seth
>Priority: Critical
> Fix For: 0.24.0, 0.23.1
>
> Attachments: HDFS2710_v1.txt, HDFS2710_v1.txt
>
>
> viewfs implementation of getDelegationTokens(String, Credentials)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2710) HDFS part of MAPREDUCE-3529, HADOOP-7933

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181266#comment-13181266
 ] 

Hudson commented on HDFS-2710:
--

Integrated in Hadoop-Hdfs-trunk #917 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/917/])
HDFS-2710. Add HDFS tests related to HADOOP-7933. Contributed by Siddarth 
Seth.

suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1227756
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemHdfs.java


> HDFS part of MAPREDUCE-3529, HADOOP-7933
> 
>
> Key: HDFS-2710
> URL: https://issues.apache.org/jira/browse/HDFS-2710
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Siddharth Seth
>Priority: Critical
> Fix For: 0.24.0, 0.23.1
>
> Attachments: HDFS2710_v1.txt, HDFS2710_v1.txt
>
>
> viewfs implementation of getDelegationTokens(String, Credentials)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2697) Move RefreshAuthPolicy, RefreshUserMappings, GetUserMappings protocol to protocol buffers

2012-01-06 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181271#comment-13181271
 ] 

Hudson commented on HDFS-2697:
--

Integrated in Hadoop-Hdfs-trunk #917 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/917/])
HDFS-2697. Move RefreshAuthPolicy, RefreshUserMappings, GetUserMappings 
protocol to protocol buffers.

jitendra : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1227887
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/GetUserMappingsProtocolClientSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/GetUserMappingsProtocolPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/GetUserMappingsProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshAuthorizationPolicyProtocolClientSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshAuthorizationPolicyProtocolPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshAuthorizationPolicyProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshUserMappingsProtocolClientSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshUserMappingsProtocolPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/RefreshUserMappingsProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/GetGroups.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/GetUserMappingsProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/RefreshAuthorizationPolicyProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/RefreshUserMappingsProtocol.proto


> Move RefreshAuthPolicy, RefreshUserMappings, GetUserMappings protocol to 
> protocol buffers
> -
>
> Key: HDFS-2697
> URL: https://issues.apache.org/jira/browse/HDFS-2697
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Suresh Srinivas
>Assignee: Jitendra Nath Pandey
> Fix For: 0.24.0
>
> Attachments: HDFS-2697.trunk.patch, HDFS-2697.trunk.patch, 
> HDFS-2697.trunk.patch, HDFS-2697.trunk.patch
>
>
> Protocol buffer implementation for RefreshAuthPolicy, RefreshUserMappings, 
> GetUserMappings protocols, includes PB service definitions, translator 
> implementations and changes to DFSAdmin and NamenodeRpcServer to use protocol 
> buffer implementations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2760) HDFS notification

2012-01-06 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181251#comment-13181251
 ] 

Harsh J commented on HDFS-2760:
---

Denny,

Are you looking for something like NameNode RPC plugins, like the ones Hue uses 
via HDFS-460?

> HDFS notification 
> --
>
> Key: HDFS-2760
> URL: https://issues.apache.org/jira/browse/HDFS-2760
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 0.20.2
>Reporter: Denny Ye
>Priority: Minor
>  Labels: HDFS
>
> Clients would like to receive notifications of the namespace operations they are 
> interested in at HDFS, like 'rename', 'block allocation' and so on. The NameNode 
> should support notifying clients of such changes in an appropriate way. 
> In my opinion, the notification should be applied between the NameNode 
> RPC server and the actual NameNode object (i.e. a proxy for the NameNode object).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



