[jira] [Commented] (HDFS-2205) Log message for failed connection to datanode is not followed by a success message.
[ https://issues.apache.org/jira/browse/HDFS-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123425#comment-13123425 ]

Steve Loughran commented on HDFS-2205:
--------------------------------------

+1

> Log message for failed connection to datanode is not followed by a success message.
> -----------------------------------------------------------------------------------
>
>                 Key: HDFS-2205
>                 URL: https://issues.apache.org/jira/browse/HDFS-2205
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs client
>    Affects Versions: 0.23.0
>            Reporter: Ravi Prakash
>            Assignee: Ravi Prakash
>             Fix For: 0.23.0
>         Attachments: HDFS-2205.patch, HDFS-2205.patch, HDFS-2205.patch
>
> To avoid confusing users about whether their HDFS operation was successful or not, a success message should be printed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2205) Log message for failed connection to datanode is not followed by a success message.
[ https://issues.apache.org/jira/browse/HDFS-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123427#comment-13123427 ]

Steve Loughran commented on HDFS-2205:
--------------------------------------

Committed! Thanks!
[jira] [Updated] (HDFS-2205) Log message for failed connection to datanode is not followed by a success message.
[ https://issues.apache.org/jira/browse/HDFS-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HDFS-2205:
---------------------------------
       Resolution: Fixed
    Fix Version/s: 0.24.0
           Status: Resolved  (was: Patch Available)
[jira] [Commented] (HDFS-2209) Make MiniDFS easier to embed in other apps
[ https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123428#comment-13123428 ]

Hudson commented on HDFS-2209:
------------------------------

Integrated in Hadoop-Common-trunk-Commit #1044 (See [https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1044/])
HDFS-2209 datanode connection failure logging
stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180353
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java

> Make MiniDFS easier to embed in other apps
> ------------------------------------------
>
>                 Key: HDFS-2209
>                 URL: https://issues.apache.org/jira/browse/HDFS-2209
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: test
>    Affects Versions: 0.20.203.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>             Fix For: 0.23.0, 0.24.0
>         Attachments: HDFS-2209.patch, HDFS-2209.patch, HDFS-2209.patch, HDFS-2209.patch, HDFS-2209.patch
>
>   Original Estimate: 1h
>          Time Spent: 1.5h
>  Remaining Estimate: 2h
>
> I've been deploying MiniDFSCluster for some testing, and while using it and looking through the code I made notes of issues and improvement opportunities. These are mostly minor as it's a test tool, but the risk of synchronization problems is real and does need addressing; the rest is feature creep.
>
> * The field {{nameNode}} should be marked volatile, as the shutdown operation can run in a different thread than startup. Better still, add synchronized methods to set and get the field, as well as to shut down.
> * The data dir is set from system properties:
> {code}
> base_dir = new File(System.getProperty("test.build.data", "build/test/data"), "dfs/");
> data_dir = new File(base_dir, "data");
> {code}
> This is done in {{formatDataNodeDirs()}}, {{corruptBlockOnDataNode()}} and the constructor. Improvement: have a test property in the conf file, and only read the system property if it is unset. This will enable multiple MiniDFSClusters to come up in the same JVM, handle shutdown/startup race conditions better, and avoid the "java.io.IOException: Cannot lock storage build/test/data/dfs/name1. The directory is already locked." messages.
> * Messages should log to commons-logging rather than {{System.err}} and {{System.out}}. This lets containers catch and stream them better, and include more diagnostics such as timestamp and thread ID.
> * The class could benefit from a method that returns the FS URI, rather than just the FS. This currently has to be worked around with tricks involving a cached configuration.
> * {{waitActive()}} could get confused if localhost maps to an IPv6 address. Better to ask for 127.0.0.1 as the hostname; JUnit test runs may also need to be set up to force IPv4.
> * {{injectBlocks}} has a spelling error in its IOException: SimulatedFSDataset is the correct spelling.
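The conf-first, system-property-fallback improvement suggested above can be sketched as follows. This is an illustrative simplification, not the actual MiniDFSCluster patch: a plain `Properties` object stands in for a Hadoop `Configuration`, and `TestDirResolver` is a hypothetical name; only the `test.build.data` property and its `build/test/data` default come from the issue text.

```java
import java.io.File;
import java.util.Properties;

// Hypothetical sketch of the HDFS-2209 suggestion: prefer a directory set in
// the test configuration, and fall back to the "test.build.data" system
// property (with its usual default) only when the configuration is silent.
class TestDirResolver {
    static final String PROP = "test.build.data";
    static final String DEFAULT = "build/test/data";

    // 'conf' stands in for a Hadoop Configuration object in this sketch.
    static File resolveBaseDir(Properties conf) {
        String dir = conf.getProperty(PROP);
        if (dir == null) {
            // Configuration is silent: fall back to the system property,
            // then to the hard-coded default the current code uses.
            dir = System.getProperty(PROP, DEFAULT);
        }
        return new File(dir, "dfs");
    }
}
```

Because each cluster can then be handed its own configured directory, several MiniDFSClusters can coexist in one JVM without fighting over the same storage lock.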
[jira] [Commented] (HDFS-2209) Make MiniDFS easier to embed in other apps
[ https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123429#comment-13123429 ]

Hudson commented on HDFS-2209:
------------------------------

Integrated in Hadoop-Hdfs-trunk-Commit #1122 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1122/])
HDFS-2209 datanode connection failure logging
stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180353
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
[jira] [Commented] (HDFS-2209) Make MiniDFS easier to embed in other apps
[ https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123443#comment-13123443 ]

Hudson commented on HDFS-2209:
------------------------------

Integrated in Hadoop-Mapreduce-trunk-Commit #1064 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1064/])
HDFS-2209 datanode connection failure logging
stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180353
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
[jira] [Commented] (HDFS-2412) Add backwards-compatibility layer for FSConstants
[ https://issues.apache.org/jira/browse/HDFS-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123447#comment-13123447 ]

Hudson commented on HDFS-2412:
------------------------------

Integrated in Hadoop-Hdfs-trunk #824 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/824/])
HDFS-2412. Add backwards-compatibility layer for renamed FSConstants class. Contributed by Todd Lipcon.
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180202
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FSConstants.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java

> Add backwards-compatibility layer for FSConstants
> -------------------------------------------------
>
>                 Key: HDFS-2412
>                 URL: https://issues.apache.org/jira/browse/HDFS-2412
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.23.0
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Blocker
>             Fix For: 0.23.0
>         Attachments: hdfs-2412.txt
>
> HDFS-1620 renamed FSConstants, which we believed to be a private class. But the public APIs for safe-mode and datanode reports currently depend on constants in FSConstants, and this is breaking HBase builds against 0.23. This JIRA provides a backward-compatibility route.
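A backward-compatibility route of the kind HDFS-2412 describes is commonly built with a deprecated-alias pattern: the renamed class carries the constants, and the old class name is kept as an empty, deprecated subclass so callers compiled against it still resolve. The sketch below illustrates that pattern only; the constant name `EXAMPLE_QUOTA_DONT_SET` is invented here, and the real patch may differ in its details.

```java
// Illustrative sketch of a backwards-compatibility shim in the spirit of
// HDFS-2412. HdfsConstants is the new home of the constants; FSConstants
// survives as a deprecated alias. The constant itself is hypothetical.
class HdfsConstants {
    static final long EXAMPLE_QUOTA_DONT_SET = Long.MAX_VALUE;
}

/** @deprecated use {@link HdfsConstants} instead. */
@Deprecated
class FSConstants extends HdfsConstants {
    // Intentionally empty: static constants are reachable through the
    // subclass name, so FSConstants.EXAMPLE_QUOTA_DONT_SET still compiles
    // for old callers such as the HBase build mentioned in the issue.
}
```

Java resolves `FSConstants.EXAMPLE_QUOTA_DONT_SET` to the inherited member, so downstream code needs no source change, only (eventually) a migration to the new name.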
[jira] [Commented] (HDFS-2322) the build fails in Windows because commons-daemon TAR cannot be fetched
[ https://issues.apache.org/jira/browse/HDFS-2322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123448#comment-13123448 ]

Hudson commented on HDFS-2322:
------------------------------

Integrated in Hadoop-Hdfs-trunk #824 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/824/])
HDFS-2322. the build fails in Windows because commons-daemon TAR cannot be fetched. (tucu)
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180094
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml

> the build fails in Windows because commons-daemon TAR cannot be fetched
> -----------------------------------------------------------------------
>
>                 Key: HDFS-2322
>                 URL: https://issues.apache.org/jira/browse/HDFS-2322
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: build
>    Affects Versions: 0.23.0, 0.24.0
>            Reporter: Alejandro Abdelnur
>            Assignee: Alejandro Abdelnur
>             Fix For: 0.23.0, 0.24.0
>         Attachments: HDFS-2322v1.patch
>
> For Windows there is no commons-daemon TAR but a ZIP, and the name follows a different convention.
[jira] [Commented] (HDFS-2209) Make MiniDFS easier to embed in other apps
[ https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123450#comment-13123450 ]

Hudson commented on HDFS-2209:
------------------------------

Integrated in Hadoop-Hdfs-trunk #824 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/824/])
HDFS-2209 datanode connection failure logging
HDFS-2209. Make MiniDFS easier to embed in other apps.
stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180353
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java

stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180077
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestCrcCorruption.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCorruption.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMiniDFSCluster.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDeleteBlockPool.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestListCorruptFileBlocks.java
[jira] [Commented] (HDFS-2412) Add backwards-compatibility layer for FSConstants
[ https://issues.apache.org/jira/browse/HDFS-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123467#comment-13123467 ]

Hudson commented on HDFS-2412:
------------------------------

Integrated in Hadoop-Hdfs-0.23-Build #33 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/33/])
HDFS-2412. Add backwards-compatibility layer for renamed FSConstants class. Contributed by Todd Lipcon.
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180203
Files :
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FSConstants.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
[jira] [Commented] (HDFS-2209) Make MiniDFS easier to embed in other apps
[ https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123469#comment-13123469 ]

Hudson commented on HDFS-2209:
------------------------------

Integrated in Hadoop-Hdfs-0.23-Build #33 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/33/])
HDFS-2209 datanode connection failure logging
HDFS-2209. Make MiniDFS easier to embed in other apps.
stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180354
Files :
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java

stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180078
Files :
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestCrcCorruption.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCorruption.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMiniDFSCluster.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDeleteBlockPool.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestListCorruptFileBlocks.java
[jira] [Commented] (HDFS-2209) Make MiniDFS easier to embed in other apps
[ https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123483#comment-13123483 ]

Hudson commented on HDFS-2209:
------------------------------

Integrated in Hadoop-Mapreduce-0.23-Build #40 (See [https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/40/])
HDFS-2209 datanode connection failure logging
HDFS-2209. Make MiniDFS easier to embed in other apps.
stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180354
Files :
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java

stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1180078
Files :
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestCrcCorruption.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCorruption.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMiniDFSCluster.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDeleteBlockPool.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestListCorruptFileBlocks.java
[jira] [Commented] (HDFS-2412) Add backwards-compatibility layer for FSConstants
[ https://issues.apache.org/jira/browse/HDFS-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13123481#comment-13123481 ] Hudson commented on HDFS-2412: -- Integrated in Hadoop-Mapreduce-0.23-Build #40 (See [https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/40/]) HDFS-2412. Add backwards-compatibility layer for renamed FSConstants class. Contributed by Todd Lipcon. todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1180203 Files : * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FSConstants.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java Add backwards-compatibility layer for FSConstants - Key: HDFS-2412 URL: https://issues.apache.org/jira/browse/HDFS-2412 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 0.23.0 Reporter: Todd Lipcon Assignee: Todd Lipcon Priority: Blocker Fix For: 0.23.0 Attachments: hdfs-2412.txt HDFS-1620 renamed FSConstants which we believed to be a private class. But currently the public APIs for safe-mode and datanode reports depend on constants in FSConstants. This is breaking HBase builds against 0.23. This JIRA is to provide a backward-compatibility route. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2412) Add backwards-compatibility layer for FSConstants
[ https://issues.apache.org/jira/browse/HDFS-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13123488#comment-13123488 ] Hudson commented on HDFS-2412: -- Integrated in Hadoop-Mapreduce-trunk #854 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/854/]) HDFS-2412. Add backwards-compatibility layer for renamed FSConstants class. Contributed by Todd Lipcon. todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1180202 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FSConstants.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java Add backwards-compatibility layer for FSConstants - Key: HDFS-2412 URL: https://issues.apache.org/jira/browse/HDFS-2412 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 0.23.0 Reporter: Todd Lipcon Assignee: Todd Lipcon Priority: Blocker Fix For: 0.23.0 Attachments: hdfs-2412.txt HDFS-1620 renamed FSConstants which we believed to be a private class. But currently the public APIs for safe-mode and datanode reports depend on constants in FSConstants. This is breaking HBase builds against 0.23. This JIRA is to provide a backward-compatibility route. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2322) the build fails in Windows because commons-daemon TAR cannot be fetched
[ https://issues.apache.org/jira/browse/HDFS-2322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13123489#comment-13123489 ] Hudson commented on HDFS-2322: -- Integrated in Hadoop-Mapreduce-trunk #854 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/854/]) HDFS-2322. the build fails in Windows because commons-daemon TAR cannot be fetched. (tucu) tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1180094 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml the build fails in Windows because commons-daemon TAR cannot be fetched --- Key: HDFS-2322 URL: https://issues.apache.org/jira/browse/HDFS-2322 Project: Hadoop HDFS Issue Type: Bug Components: build Affects Versions: 0.23.0, 0.24.0 Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Fix For: 0.23.0, 0.24.0 Attachments: HDFS-2322v1.patch For windows there is no commons-daemon TAR but a ZIP, plus the name follows a different convention. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2209) Make MiniDFS easier to embed in other apps
[ https://issues.apache.org/jira/browse/HDFS-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13123491#comment-13123491 ] Hudson commented on HDFS-2209: -- Integrated in Hadoop-Mapreduce-trunk #854 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/854/]) HDFS-2209 datanode connection failure logging HDFS-2209. Make MiniDFS easier to embed in other apps. stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1180353 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1180077 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestCrcCorruption.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCorruption.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMiniDFSCluster.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDeleteBlockPool.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java * 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestListCorruptFileBlocks.java Make MiniDFS easier to embed in other apps -- Key: HDFS-2209 URL: https://issues.apache.org/jira/browse/HDFS-2209 Project: Hadoop HDFS Issue Type: Improvement Components: test Affects Versions: 0.20.203.0 Reporter: Steve Loughran Assignee: Steve Loughran Priority: Minor Fix For: 0.23.0, 0.24.0 Attachments: HDFS-2209.patch, HDFS-2209.patch, HDFS-2209.patch, HDFS-2209.patch, HDFS-2209.patch Original Estimate: 1h Time Spent: 1.5h Remaining Estimate: 2h I've been deploying MiniDFSCluster for some testing, and while using it/looking through the code I made some notes of where there are issues and improvement opportunities. This is mostly minor as it's a test tool, but there is a risk of synchronization problems that does need addressing; the rest are all feature creep. Field {{nameNode}} should be marked as volatile, as the shutdown operation can be in a different thread than startup. Better still, add synchronized methods to set and get the field, as well as to shut down. The data dir is set from System Properties. {code} base_dir = new File(System.getProperty("test.build.data", "build/test/data"), "dfs/"); data_dir = new File(base_dir, "data"); {code} This is done in {{formatDataNodeDirs()}}, {{corruptBlockOnDataNode()}}, and the constructor. Improvement: have a test property in the conf file, and only read the system property if this is unset. This will enable multiple MiniDFSClusters to come up in the same JVM, handle shutdown/startup race conditions better, and avoid the "java.io.IOException: Cannot lock storage build/test/data/dfs/name1. The directory is already locked." messages. Messages should go to commons logging rather than {{System.err}} and {{System.out}}. This enables containers to catch and stream them better, and to include more diagnostics such as timestamp and thread ID. The class could benefit from a method to return the FS URI, rather than just the FS; this currently has to be worked around with some tricks involving a cached configuration. {{waitActive()}} could get confused if localhost maps to an IPv6 address. Better to ask for 127.0.0.1 as the hostname; JUnit test runs may need to be set up to force IPv4 too. {{injectBlocks}} has a spelling error in its IOException message; {{SimulatedFSDataset}} is the correct spelling. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators:
[jira] [Created] (HDFS-2420) improve handling of datanode timeouts
improve handling of datanode timeouts - Key: HDFS-2420 URL: https://issues.apache.org/jira/browse/HDFS-2420 Project: Hadoop HDFS Issue Type: Improvement Reporter: Ron Bodkin If a datanode ever times out on a heartbeat, it gets marked dead permanently. I am finding that on AWS this is a periodic occurrence, i.e., datanodes time out although the datanode process is still alive. The current solution is to kill and restart each such process independently. It would be good if there were more retry logic (e.g., blacklisting the nodes but trying heartbeats for a longer period before determining they are apparently dead). It would also be good if refreshNodes would check and attempt to recover timed-out datanodes. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
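The retry idea proposed above (tolerate a few missed heartbeats before declaring a node dead, rather than marking it dead on the first timeout) can be sketched like this. All names here are illustrative; this is not NameNode code, just a minimal model of the suggested policy.

```java
import java.util.HashMap;
import java.util.Map;

public class HeartbeatTracker {
    private final int deadThreshold;
    private final Map<String, Integer> missed = new HashMap<>();

    public HeartbeatTracker(int deadThreshold) {
        this.deadThreshold = deadThreshold;
    }

    // A successful heartbeat resets the consecutive-miss counter.
    public void heartbeatReceived(String node) {
        missed.put(node, 0);
    }

    // Returns true only once the node has missed deadThreshold
    // consecutive heartbeats; until then it is merely suspect.
    public boolean heartbeatMissed(String node) {
        int n = missed.merge(node, 1, Integer::sum);
        return n >= deadThreshold;
    }
}
```

A single transient timeout, as seen on AWS, would then leave the node in a "suspect" state that a later successful heartbeat clears, instead of permanently removing it.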
[jira] [Created] (HDFS-2421) Improve the concurrency of SerialNumberMap in NameNode
Improve the concurrency of SerialNumberMap in NameNode --- Key: HDFS-2421 URL: https://issues.apache.org/jira/browse/HDFS-2421 Project: Hadoop HDFS Issue Type: Improvement Components: name-node Reporter: Hairong Kuang Assignee: Weiyan Wang Fix For: 0.24.0 After enabling permission checking in our HDFS test cluster, our benchmark observed significantly reduced concurrency in the NameNode. Investigation showed that most threads were blocked acquiring the lock of org.apache.hadoop.hdfs.server.namenode.SerialNumberManager$SerialNumberMap. We used ConcurrentHashMap to replace HashMap + synchronized methods, which greatly improved the situation. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
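The pattern described in HDFS-2421 (replacing a HashMap guarded by synchronized methods with a ConcurrentHashMap so that readers of existing entries never contend on one lock) looks roughly like the sketch below. This mirrors the idea, not the actual SerialNumberManager code; the class and method names are illustrative.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class SerialMap {
    private final ConcurrentHashMap<String, Integer> name2id = new ConcurrentHashMap<>();
    private final AtomicInteger nextId = new AtomicInteger(0);

    // computeIfAbsent is atomic per key, so each name gets exactly one id
    // even under concurrent calls, and lookups of already-mapped names
    // proceed without blocking on a global lock.
    public int getSerialNumber(String name) {
        return name2id.computeIfAbsent(name, k -> nextId.getAndIncrement());
    }
}
```

The win over `synchronized` methods is that the common path (an already-interned name) is a lock-striped read rather than a serialized critical section, which is exactly the contention the benchmark observed.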
[jira] [Created] (HDFS-2422) temporary loss of NFS mount causes NN safe mode
temporary loss of NFS mount causes NN safe mode --- Key: HDFS-2422 URL: https://issues.apache.org/jira/browse/HDFS-2422 Project: Hadoop HDFS Issue Type: Bug Components: name-node Affects Versions: 0.20.2, 0.20-append Reporter: Jeff Bean We encountered a situation where the namenode dropped into safe mode after a temporary outage of an NFS mount. At 12:10 the NFS server goes offline: Oct 8 12:10:05 namenode kernel: nfs: server nfs host not responding, timed out This caused the namenode to conclude there were resource issues: 2011-10-08 12:10:34,848 WARN org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker: Space available on volume 'nfs host' is 0, which is below the configured reserved amount 104857600 Temporary loss of an NFS mount shouldn't cause safemode. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-2417) Warnings about attempt to override final parameter while getting delegation token
[ https://issues.apache.org/jira/browse/HDFS-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Prakash updated HDFS-2417: --- Attachment: HDFS-2417.patch Rebased HADOOP-7664 to branch-0.20-security. The same patch also applies to branch-0.20-security-205. Warnings about attempt to override final parameter while getting delegation token - Key: HDFS-2417 URL: https://issues.apache.org/jira/browse/HDFS-2417 Project: Hadoop HDFS Issue Type: Bug Components: name-node Affects Versions: 0.20.205.0 Reporter: Rajit Saha Attachments: HDFS-2417.patch Whenever I run any MapReduce job and it tries to acquire a delegation token from the NN, the following warnings about attempts to override final parameters appear in the JT log. The log snippet from the JT log: 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.job.reuse.jvm.num.tasks; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.system.dir; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: hadoop.job.history.user.location; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.local.dir; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.job.tracker.http.address; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: dfs.data.dir; Ignoring. 
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: dfs.http.address; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapreduce.admin.map.child.java.opts; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapreduce.history.server.http.address; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapreduce.history.server.embedded; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapreduce.jobtracker.split.metainfo.maxsize; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapreduce.admin.reduce.child.java.opts; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: hadoop.tmp.dir; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.jobtracker.maxtasks.per.job; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.job.tracker; Ignoring. 
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: dfs.name.dir; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.temp.dir; Ignoring. 2011-10-07 20:29:19,103 INFO org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal: registering token for renewal for service =NN IP:50470 and jobID = job_201110072015_0005 2011-10-07 20:29:19,103 INFO org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal: registering token for renewal for service =NN IP:8020 and jobID = job_201110072015_0005 The STDOUT of distcp job when these warnings logged into JT log
[jira] [Updated] (HDFS-2414) TestDFSRollback fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon updated HDFS-2414: -- Resolution: Fixed Fix Version/s: 0.23.0 Target Version/s: 0.23.0, 0.24.0 (was: 0.24.0, 0.23.0) Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) committed to trunk and 23. Nice find :) TestDFSRollback fails intermittently Key: HDFS-2414 URL: https://issues.apache.org/jira/browse/HDFS-2414 Project: Hadoop HDFS Issue Type: Bug Components: name-node, test Affects Versions: 0.23.0 Reporter: Robert Joseph Evans Assignee: Todd Lipcon Priority: Critical Fix For: 0.23.0 Attachments: hdfs-2414.txt, hdfs-2414.txt, hdfs-2414.txt, run-106-failed.tgz, run-158-failed.tgz When running TestDFSRollback repeatedly in a loop I observed a failure rate of about 3%. Two separate stack traces are in the output and it appears to have something to do with not writing out a complete snapshot of the data for rollback. {noformat} --- Test set: org.apache.hadoop.hdfs.TestDFSRollback --- Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.514 sec FAILURE! testRollback(org.apache.hadoop.hdfs.TestDFSRollback) Time elapsed: 8.34 sec FAILURE! 
java.lang.AssertionError: File contents differed: /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data2/current/VERSION=5b19197114fad0a254e3f318b7f14aec /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data1/current/VERSION=ea7b000a6a1711169fc7a836b240a991 at org.junit.Assert.fail(Assert.java:91) at org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertFileContentsSame(FSImageTestUtil.java:250) at org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertParallelFilesAreIdentical(FSImageTestUtil.java:236) at org.apache.hadoop.hdfs.TestDFSRollback.checkResult(TestDFSRollback.java:86) at org.apache.hadoop.hdfs.TestDFSRollback.testRollback(TestDFSRollback.java:171) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at junit.framework.TestCase.runTest(TestCase.java:168) at junit.framework.TestCase.runBare(TestCase.java:134) at junit.framework.TestResult$1.protect(TestResult.java:110) at junit.framework.TestResult.runProtected(TestResult.java:128) at junit.framework.TestResult.run(TestResult.java:113) at junit.framework.TestCase.run(TestCase.java:124) at junit.framework.TestSuite.runTest(TestSuite.java:232) at junit.framework.TestSuite.run(TestSuite.java:227) at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83) at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59) at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120) at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145) at org.apache.maven.surefire.Surefire.run(Surefire.java:104) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290) at org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017) {noformat} is the more common one, but I also saw {noformat} --- Test set: org.apache.hadoop.hdfs.TestDFSRollback --- Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.471 sec FAILURE! testRollback(org.apache.hadoop.hdfs.TestDFSRollback) Time elapsed: 7.304 sec FAILURE! junit.framework.AssertionFailedError: Expected substring 'file VERSION has layoutVersion missing' in exception but got: java.lang.IllegalArgumentException: Malformed \u encoding. at
[jira] [Commented] (HDFS-2413) Add public APIs for safemode
[ https://issues.apache.org/jira/browse/HDFS-2413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123609#comment-13123609 ] M. C. Srivas commented on HDFS-2413: This should be part of the normal behavior of all file-system ops. It is not practical for a programmer to wrap every file access call (e.g., write, mkdir, open) with a wait for the NN to leave safemode. Add public APIs for safemode Key: HDFS-2413 URL: https://issues.apache.org/jira/browse/HDFS-2413 Project: Hadoop HDFS Issue Type: Improvement Components: hdfs client Affects Versions: 0.23.0 Reporter: Todd Lipcon Fix For: 0.23.0 Currently the APIs for safe-mode are part of DistributedFileSystem, which is supposed to be a private interface. However, dependent software often wants to wait until the NN is out of safemode. Though it could poll by trying to create a file and catching SafeModeException, we should consider making some of these APIs public. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
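The polling workaround that dependent software currently resorts to can be sketched generically. The BooleanSupplier below stands in for a real safemode check (for instance a call against the private DistributedFileSystem interface); the class and method names are illustrative, not a proposed API.

```java
import java.util.function.BooleanSupplier;

public class SafeModeWait {
    // Poll until the check reports the NN has left safemode, or give up
    // after maxAttempts. Returns true if the NN left safemode in time.
    static boolean waitUntil(BooleanSupplier outOfSafeMode,
                             int maxAttempts, long sleepMillis) {
        for (int i = 0; i < maxAttempts; i++) {
            if (outOfSafeMode.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(sleepMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }
}
```

A public safemode query would let this loop ask the NN directly instead of probing with a file create and catching SafeModeException, which is the exact awkwardness the JIRA argues against.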
[jira] [Commented] (HDFS-2414) TestDFSRollback fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13123610#comment-13123610 ] Hudson commented on HDFS-2414: -- Integrated in Hadoop-Common-trunk-Commit #1045 (See [https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1045/]) HDFS-2414. Fix TestDFSRollback to avoid spurious failures. Contributed by Todd Lipcon. todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1180541 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSRollback.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgrade.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/UpgradeUtilities.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSImageTestUtil.java TestDFSRollback fails intermittently Key: HDFS-2414 URL: https://issues.apache.org/jira/browse/HDFS-2414 Project: Hadoop HDFS Issue Type: Bug Components: name-node, test Affects Versions: 0.23.0 Reporter: Robert Joseph Evans Assignee: Todd Lipcon Priority: Critical Fix For: 0.23.0 Attachments: hdfs-2414.txt, hdfs-2414.txt, hdfs-2414.txt, run-106-failed.tgz, run-158-failed.tgz When running TestDFSRollback repeatedly in a loop I observed a failure rate of about 3%. Two separate stack traces are in the output and it appears to have something to do with not writing out a complete snapshot of the data for rollback. {noformat} --- Test set: org.apache.hadoop.hdfs.TestDFSRollback --- Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.514 sec FAILURE! testRollback(org.apache.hadoop.hdfs.TestDFSRollback) Time elapsed: 8.34 sec FAILURE! 
java.lang.AssertionError: File contents differed: /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data2/current/VERSION=5b19197114fad0a254e3f318b7f14aec /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data1/current/VERSION=ea7b000a6a1711169fc7a836b240a991 at org.junit.Assert.fail(Assert.java:91) at org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertFileContentsSame(FSImageTestUtil.java:250) at org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertParallelFilesAreIdentical(FSImageTestUtil.java:236) at org.apache.hadoop.hdfs.TestDFSRollback.checkResult(TestDFSRollback.java:86) at org.apache.hadoop.hdfs.TestDFSRollback.testRollback(TestDFSRollback.java:171) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at junit.framework.TestCase.runTest(TestCase.java:168) at junit.framework.TestCase.runBare(TestCase.java:134) at junit.framework.TestResult$1.protect(TestResult.java:110) at junit.framework.TestResult.runProtected(TestResult.java:128) at junit.framework.TestResult.run(TestResult.java:113) at junit.framework.TestCase.run(TestCase.java:124) at junit.framework.TestSuite.runTest(TestSuite.java:232) at junit.framework.TestSuite.run(TestSuite.java:227) at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83) at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59) at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120) at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145) at org.apache.maven.surefire.Surefire.run(Surefire.java:104) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290) at org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017) {noformat} is the more
[jira] [Commented] (HDFS-2414) TestDFSRollback fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13123611#comment-13123611 ] Hudson commented on HDFS-2414: -- Integrated in Hadoop-Hdfs-trunk-Commit #1123 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1123/]) HDFS-2414. Fix TestDFSRollback to avoid spurious failures. Contributed by Todd Lipcon. todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1180541 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSRollback.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgrade.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/UpgradeUtilities.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSImageTestUtil.java TestDFSRollback fails intermittently Key: HDFS-2414 URL: https://issues.apache.org/jira/browse/HDFS-2414 Project: Hadoop HDFS Issue Type: Bug Components: name-node, test Affects Versions: 0.23.0 Reporter: Robert Joseph Evans Assignee: Todd Lipcon Priority: Critical Fix For: 0.23.0 Attachments: hdfs-2414.txt, hdfs-2414.txt, hdfs-2414.txt, run-106-failed.tgz, run-158-failed.tgz When running TestDFSRollback repeatedly in a loop I observed a failure rate of about 3%. Two separate stack traces are in the output and it appears to have something to do with not writing out a complete snapshot of the data for rollback. {noformat} --- Test set: org.apache.hadoop.hdfs.TestDFSRollback --- Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.514 sec FAILURE! testRollback(org.apache.hadoop.hdfs.TestDFSRollback) Time elapsed: 8.34 sec FAILURE! 
java.lang.AssertionError: File contents differed: /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data2/current/VERSION=5b19197114fad0a254e3f318b7f14aec /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data1/current/VERSION=ea7b000a6a1711169fc7a836b240a991 at org.junit.Assert.fail(Assert.java:91) at org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertFileContentsSame(FSImageTestUtil.java:250) at org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertParallelFilesAreIdentical(FSImageTestUtil.java:236) at org.apache.hadoop.hdfs.TestDFSRollback.checkResult(TestDFSRollback.java:86) at org.apache.hadoop.hdfs.TestDFSRollback.testRollback(TestDFSRollback.java:171) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at junit.framework.TestCase.runTest(TestCase.java:168) at junit.framework.TestCase.runBare(TestCase.java:134) at junit.framework.TestResult$1.protect(TestResult.java:110) at junit.framework.TestResult.runProtected(TestResult.java:128) at junit.framework.TestResult.run(TestResult.java:113) at junit.framework.TestCase.run(TestCase.java:124) at junit.framework.TestSuite.runTest(TestSuite.java:232) at junit.framework.TestSuite.run(TestSuite.java:227) at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83) at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59) at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120) at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145) at org.apache.maven.surefire.Surefire.run(Surefire.java:104) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290) at org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017) {noformat} is the more common
[jira] [Commented] (HDFS-2414) TestDFSRollback fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13123615#comment-13123615 ] Hudson commented on HDFS-2414: -- Integrated in Hadoop-Mapreduce-trunk-Commit #1065 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1065/]) HDFS-2414. Fix TestDFSRollback to avoid spurious failures. Contributed by Todd Lipcon. todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1180541 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSRollback.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgrade.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/UpgradeUtilities.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSImageTestUtil.java TestDFSRollback fails intermittently Key: HDFS-2414 URL: https://issues.apache.org/jira/browse/HDFS-2414 Project: Hadoop HDFS Issue Type: Bug Components: name-node, test Affects Versions: 0.23.0 Reporter: Robert Joseph Evans Assignee: Todd Lipcon Priority: Critical Fix For: 0.23.0 Attachments: hdfs-2414.txt, hdfs-2414.txt, hdfs-2414.txt, run-106-failed.tgz, run-158-failed.tgz When running TestDFSRollback repeatedly in a loop I observed a failure rate of about 3%. Two separate stack traces are in the output and it appears to have something to do with not writing out a complete snapshot of the data for rollback. {noformat} --- Test set: org.apache.hadoop.hdfs.TestDFSRollback --- Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.514 sec FAILURE! testRollback(org.apache.hadoop.hdfs.TestDFSRollback) Time elapsed: 8.34 sec FAILURE! 
java.lang.AssertionError: File contents differed:
  /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data2/current/VERSION=5b19197114fad0a254e3f318b7f14aec
  /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data1/current/VERSION=ea7b000a6a1711169fc7a836b240a991
	at org.junit.Assert.fail(Assert.java:91)
	at org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertFileContentsSame(FSImageTestUtil.java:250)
	at org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertParallelFilesAreIdentical(FSImageTestUtil.java:236)
	at org.apache.hadoop.hdfs.TestDFSRollback.checkResult(TestDFSRollback.java:86)
	at org.apache.hadoop.hdfs.TestDFSRollback.testRollback(TestDFSRollback.java:171)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:232)
	at junit.framework.TestSuite.run(TestSuite.java:227)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
	at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120)
	at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145)
	at org.apache.maven.surefire.Surefire.run(Surefire.java:104)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290)
	at org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017)
{noformat}
is the
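The assertion in the trace above comes from FSImageTestUtil.assertFileContentsSame, which, judging from the failure message, reduces each storage directory's copy of a file (here current/VERSION) to an MD5 digest and fails when any two digests differ. A minimal standalone sketch of that kind of check — the class, method names, and file contents here are illustrative, not Hadoop's actual code:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class ParallelFileCheck {
    // Hex-encoded MD5 of a file's bytes, as in the "VERSION=<md5>" output above.
    static String md5Hex(Path p) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(Files.readAllBytes(p));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // True iff every path holds byte-identical contents (compared by digest).
    static boolean contentsSame(Path... paths) throws Exception {
        String first = md5Hex(paths[0]);
        for (Path p : paths) {
            if (!md5Hex(p).equals(first)) return false;
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        // Stand-ins for data1/current/VERSION and data2/current/VERSION.
        Path a = Files.createTempFile("data1-VERSION", null);
        Path b = Files.createTempFile("data2-VERSION", null);
        Files.write(a, "layoutVersion=-38\n".getBytes());
        Files.write(b, "layoutVersion=-38\n".getBytes());
        System.out.println("same: " + contentsSame(a, b));

        // An incomplete rollback snapshot would leave the copies diverged:
        Files.write(b, "layoutVersion=-39\n".getBytes());
        System.out.println("same: " + contentsSame(a, b));
    }
}
```

The intermittent failure reported here is exactly the second case: one storage directory's VERSION diverging from another's.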
[jira] [Updated] (HDFS-1762) Allow TestHDFSCLI to be run against a cluster
[ https://issues.apache.org/jira/browse/HDFS-1762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Boudnik updated HDFS-1762:
-------------------------------------

    Attachment:     (was: HDFS-1762.common.patch)

Allow TestHDFSCLI to be run against a cluster
---------------------------------------------

Key: HDFS-1762
URL: https://issues.apache.org/jira/browse/HDFS-1762
Project: Hadoop HDFS
Issue Type: Test
Components: build, test
Affects Versions: 0.22.0
Reporter: Tom White
Assignee: Konstantin Boudnik
Attachments: HDFS-1762-20.patch, HDFS-1762.hdfs.patch, HDFS-1762.hdfs.patch

Currently TestHDFSCLI starts mini clusters to run tests against. It would be useful to be able to support running against arbitrary clusters for testing purposes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-1762) Allow TestHDFSCLI to be run against a cluster
[ https://issues.apache.org/jira/browse/HDFS-1762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123627#comment-13123627 ]

Konstantin Boudnik commented on HDFS-1762:
------------------------------------------

For the complete picture, look at the patches associated with HADOOP-7730 and MAPREDUCE-3156
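The change under discussion boils down to letting TestHDFSCLI target an externally supplied cluster instead of always starting a MiniDFSCluster. A hypothetical sketch of such a switch — the property name `test.cluster.uri` and both URIs are invented for illustration; the actual patches may use a different mechanism:

```java
public class ClusterTarget {
    /** Returns the filesystem URI the CLI tests should run against. */
    static String resolveFsUri() {
        // If the caller supplied an external cluster, use it ...
        String external = System.getProperty("test.cluster.uri");
        if (external != null && !external.isEmpty()) {
            return external;
        }
        // ... otherwise fall back to a placeholder standing in for the
        // URI a freshly started MiniDFSCluster would report.
        return "hdfs://localhost:8020";
    }

    public static void main(String[] args) {
        System.setProperty("test.cluster.uri", "hdfs://nn.example.com:8020");
        System.out.println(resolveFsUri());

        System.clearProperty("test.cluster.uri");
        System.out.println(resolveFsUri());
    }
}
```

With a hook like this, the same test suite runs unchanged in-process or against an arbitrary real cluster, which is what the issue asks for.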
[jira] [Updated] (HDFS-1762) Allow TestHDFSCLI to be run against a cluster
[ https://issues.apache.org/jira/browse/HDFS-1762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Boudnik updated HDFS-1762:
-------------------------------------

    Attachment:     (was: HDFS-1762.mapreduce.patch)