[jira] [Created] (HDFS-2067) Bump DATA_TRANSFER_VERSION in trunk for protobufs

2011-06-10 Thread Todd Lipcon (JIRA)
Bump DATA_TRANSFER_VERSION in trunk for protobufs
-

 Key: HDFS-2067
 URL: https://issues.apache.org/jira/browse/HDFS-2067
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node, hdfs client
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 0.23.0


Forgot to bump DATA_TRANSFER_VERSION in HDFS-2058. We need to do this since the 
protobufs are incompatible with the old writables.
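
For context, here is a minimal sketch of the kind of handshake check that DATA_TRANSFER_VERSION guards; the class, method, and version value below are illustrative assumptions, not the actual DataTransferProtocol code.

{code}
import java.io.DataInputStream;
import java.io.IOException;

// Sketch only: a peer speaking an older (Writable-based) wire format must be
// rejected once the transfer protocol moves to protobufs, which is why the
// version constant has to be bumped.
public class DataTransferVersionCheck {
  // Hypothetical value; the real constant lives in DataTransferProtocol.
  public static final int DATA_TRANSFER_VERSION = 24;

  public static void checkVersion(DataInputStream in) throws IOException {
    int remoteVersion = in.readShort();
    if (remoteVersion != DATA_TRANSFER_VERSION) {
      throw new IOException("Version mismatch: expected "
          + DATA_TRANSFER_VERSION + ", received " + remoteVersion);
    }
  }
}
{code}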

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-2066) Create a package and individual class files for DataTransferProtocol

2011-06-10 Thread Tsz Wo (Nicholas), SZE (JIRA)
Create a package and individual class files for DataTransferProtocol


 Key: HDFS-2066
 URL: https://issues.apache.org/jira/browse/HDFS-2066
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo (Nicholas), SZE


{{DataTransferProtocol}} contains quite a few classes.  It is better to create 
a package and put the classes into individual files.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hadoop-Hdfs-trunk-Commit - Build # 735 - Still Failing

2011-06-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/735/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 2700 lines...]
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestDiskError
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 9.045 sec
[junit] Running 
org.apache.hadoop.hdfs.server.datanode.TestInterDatanodeProtocol
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.36 sec
[junit] Running 
org.apache.hadoop.hdfs.server.datanode.TestSimulatedFSDataset
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.753 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestBackupNode
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 18.523 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 30.515 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestComputeInvalidateWork
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 4.908 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestDatanodeDescriptor
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.17 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestEditLog
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 13.418 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestFileLimit
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 4.793 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestHeartbeatHandling
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.365 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestHost2NodesMap
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.084 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.112 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestOverReplicatedBlocks
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 4.089 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestPendingReplication
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.321 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestReplicationPolicy
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.056 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestSafeMode
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.678 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestStartup
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 10.846 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestStorageRestore
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.556 sec
[junit] Running org.apache.hadoop.net.TestNetworkTopology
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.108 sec
[junit] Running org.apache.hadoop.security.TestPermission
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.27 sec

checkfailure:
[touch] Creating 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:732:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:689:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:757:
 Tests failed!

Total time: 9 minutes 46 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that 
failed

Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed 
results to identify the command that failed
at 
org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)





[jira] [Created] (HDFS-2065) Fix NPE in DFSClient.getFileChecksum

2011-06-10 Thread Bharath Mundlapudi (JIRA)
Fix NPE in DFSClient.getFileChecksum


 Key: HDFS-2065
 URL: https://issues.apache.org/jira/browse/HDFS-2065
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Bharath Mundlapudi
Assignee: Bharath Mundlapudi
 Fix For: 0.23.0


The following code can throw an NPE if callGetBlockLocations returns null, i.e. if 
the server returns null:

{code}
List<LocatedBlock> locatedblocks
    = callGetBlockLocations(namenode, src, 0, Long.MAX_VALUE).getLocatedBlocks();
{code}

The right fix is for the server to throw an appropriate exception instead of 
returning null.
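
For illustration, here is a minimal client-side guard along the lines discussed above. It is a sketch, not the committed fix; the helper class name and the choice of FileNotFoundException are assumptions, and the ClientProtocol/LocatedBlocks types are used as they existed in this era of the API.

{code}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;

// Sketch only: check the RPC result before dereferencing it, so a null
// reply surfaces as a meaningful exception instead of an NPE.
class GetFileChecksumGuard {
  static List<LocatedBlock> locatedBlocksOrThrow(ClientProtocol namenode, String src)
      throws IOException {
    LocatedBlocks blocks = namenode.getBlockLocations(src, 0, Long.MAX_VALUE);
    if (blocks == null) {
      throw new FileNotFoundException("File does not exist: " + src);
    }
    return blocks.getLocatedBlocks();
  }
}
{code}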



--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hadoop-Hdfs-22-branch - Build # 65 - Still Failing

2011-06-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-22-branch/65/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 3046 lines...]

compile-hdfs-test:
   [delete] Deleting directory 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache

run-test-hdfs-excluding-commit-and-smoke:
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/logs
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/extraconf
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/extraconf
[junit] WARNING: multiple versions of ant detected in path for junit 
[junit]  
jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
[junit]  and 
jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
[junit] Running org.apache.hadoop.fs.TestFiListPath
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 2.498 sec
[junit] Running org.apache.hadoop.fs.TestFiRename
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 5.626 sec
[junit] Running org.apache.hadoop.hdfs.TestFiHFlush
[junit] Tests run: 9, Failures: 0, Errors: 0, Time elapsed: 15.678 sec
[junit] Running org.apache.hadoop.hdfs.TestFiHftp
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 38.452 sec
[junit] Running org.apache.hadoop.hdfs.TestFiPipelines
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.491 sec
[junit] Running 
org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol
[junit] Tests run: 29, Failures: 0, Errors: 0, Time elapsed: 210.399 sec
[junit] Running 
org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2
[junit] Tests run: 10, Failures: 0, Errors: 0, Time elapsed: 280.269 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.794 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:747:
 Tests failed!

Total time: 73 minutes 34 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
2 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestLargeBlock.testLargeBlockSize

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time 
until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in 
the report does not reflect the time until the timeout.


REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer2

Error Message:
127.0.0.1:34837is not an underUtilized node

Stack Trace:
junit.framework.AssertionFailedError: 127.0.0.1:34837is not an underUtilized 
node
at 
org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes

Hadoop-Hdfs-trunk-Commit - Build # 734 - Still Failing

2011-06-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/734/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 2684 lines...]
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 8.881 sec
[junit] Running 
org.apache.hadoop.hdfs.server.datanode.TestInterDatanodeProtocol
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.531 sec
[junit] Running 
org.apache.hadoop.hdfs.server.datanode.TestSimulatedFSDataset
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.8 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestBackupNode
[junit] Tests run: 2, Failures: 1, Errors: 0, Time elapsed: 6.105 sec
[junit] Test org.apache.hadoop.hdfs.server.namenode.TestBackupNode FAILED
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 31.357 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestComputeInvalidateWork
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 4.928 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestDatanodeDescriptor
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.158 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestEditLog
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 13.12 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestFileLimit
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 4.591 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestHeartbeatHandling
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.161 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestHost2NodesMap
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.085 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.087 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestOverReplicatedBlocks
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.794 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestPendingReplication
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.328 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestReplicationPolicy
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.056 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestSafeMode
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.444 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestStartup
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 10.441 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestStorageRestore
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.531 sec
[junit] Running org.apache.hadoop.net.TestNetworkTopology
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.104 sec
[junit] Running org.apache.hadoop.security.TestPermission
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.422 sec

checkfailure:
[touch] Creating 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:728:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:685:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:753:
 Tests failed!

Total time: 9 minutes 17 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
2 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint

Error Message:
Port in use: 0.0.0.0:50105

Stack Trace:
junit.framework.AssertionFailedError: Port in use: 0.0.0.0:50105
at 
org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint(TestBackupNode.java:143)
at 
org.apache.hadoop.hdfs.server.namenode.TestBackupNode.__CLR3_0_2xuql33zmx(TestBackupNode.java:103)
at 
org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint(TestBackupNode.java:102)


FAILED:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to ident

Hadoop-Hdfs-trunk-Commit - Build # 733 - Still Failing

2011-06-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/733/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 2683 lines...]
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestDiskError
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 9.643 sec
[junit] Running 
org.apache.hadoop.hdfs.server.datanode.TestInterDatanodeProtocol
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.637 sec
[junit] Running 
org.apache.hadoop.hdfs.server.datanode.TestSimulatedFSDataset
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.74 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestBackupNode
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 18.261 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 30.186 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestComputeInvalidateWork
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 5.124 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestDatanodeDescriptor
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.173 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestEditLog
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 12.946 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestFileLimit
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 4.874 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestHeartbeatHandling
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.26 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestHost2NodesMap
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.089 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.97 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestOverReplicatedBlocks
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 4.003 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestPendingReplication
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.294 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestReplicationPolicy
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.057 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestSafeMode
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.903 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestStartup
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 10.606 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestStorageRestore
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.482 sec
[junit] Running org.apache.hadoop.net.TestNetworkTopology
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.114 sec
[junit] Running org.apache.hadoop.security.TestPermission
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.448 sec

checkfailure:
[touch] Creating 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:728:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:685:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:753:
 Tests failed!

Total time: 9 minutes 5 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that 
failed

Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed 
results to identify the command that failed
at 
org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)





[jira] [Resolved] (HDFS-1295) Improve namenode restart times by short-circuiting the first block reports from datanodes

2011-06-10 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley resolved HDFS-1295.
--

   Resolution: Fixed
Fix Version/s: Federation Branch
 Hadoop Flags: [Reviewed]

Committed to yahoo-merge branch.  Thanks for the review, Suresh!

> Improve namenode restart times by short-circuiting the first block reports 
> from datanodes
> -
>
> Key: HDFS-1295
> URL: https://issues.apache.org/jira/browse/HDFS-1295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.23.0
>Reporter: dhruba borthakur
>Assignee: Matt Foley
> Fix For: Federation Branch, 0.23.0
>
> Attachments: HDFS-1295_delta_for_trunk.patch, 
> HDFS-1295_for_ymerge.patch, HDFS-1295_for_ymerge_v2.patch, 
> IBR_shortcut_v2a.patch, IBR_shortcut_v3atrunk.patch, 
> IBR_shortcut_v4atrunk.patch, IBR_shortcut_v4atrunk.patch, 
> IBR_shortcut_v4atrunk.patch, IBR_shortcut_v6atrunk.patch, 
> IBR_shortcut_v7atrunk.patch, shortCircuitBlockReport_1.txt
>
>
> The namenode restart is dominated by the performance of processing block 
> reports. On a 2000 node cluster with 90 million blocks,  block report 
> processing takes 30 to 40 minutes. The namenode "diffs" the contents of the 
> incoming block report with the contents of the blocks map, and then applies 
> these diffs to the blocksMap, but in reality there is no need to compute the 
> "diff" because this is the first block report from the datanode.
> This code change improves block report processing time by 300%.
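
For illustration, a minimal sketch of the short-circuit idea described in the report above, using plain collections rather than the real BlockManager types; the class and method names are invented for the sketch and this is not the HDFS-1295 patch itself.

{code}
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch only: on the first block report from a datanode there is nothing to
// diff against, so the reported blocks can be recorded directly.
class FirstBlockReportShortCircuit {
  void processReport(Map<String, Set<Long>> blocksPerNode,
                     String datanodeId, Set<Long> reportedBlocks) {
    Set<Long> known = blocksPerNode.get(datanodeId);
    if (known == null || known.isEmpty()) {
      // First report: skip the add/remove diff computation entirely.
      blocksPerNode.put(datanodeId, new HashSet<Long>(reportedBlocks));
      return;
    }
    // Later reports would go through the normal diff against the blocks map.
  }
}
{code}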

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HDFS-1666) TestAuthorizationFilter is failing

2011-06-10 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon reopened HDFS-1666:
---


This was deliberately left open since we need to either address the underlying 
issue, or mark hdfsproxy as broken in any future release.

> TestAuthorizationFilter is failing
> --
>
> Key: HDFS-1666
> URL: https://issues.apache.org/jira/browse/HDFS-1666
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: contrib/hdfsproxy
>Affects Versions: 0.22.0, 0.23.0
>Reporter: Konstantin Boudnik
>Assignee: Todd Lipcon
>Priority: Blocker
> Fix For: 0.22.0
>
> Attachments: hdfs-1666-disable-tests.txt
>
>
> two test cases were failing for a number of builds (see attached logs)

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-1666) TestAuthorizationFilter is failing

2011-06-10 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley resolved HDFS-1666.
-

   Resolution: Fixed
Fix Version/s: 0.22.0
 Assignee: Todd Lipcon

This was committed to trunk.

> TestAuthorizationFilter is failing
> --
>
> Key: HDFS-1666
> URL: https://issues.apache.org/jira/browse/HDFS-1666
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: contrib/hdfsproxy
>Affects Versions: 0.22.0, 0.23.0
>Reporter: Konstantin Boudnik
>Assignee: Todd Lipcon
>Priority: Blocker
> Fix For: 0.22.0
>
> Attachments: hdfs-1666-disable-tests.txt
>
>
> two test cases were failing for a number of builds (see attached logs)

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HDFS-1952) FSEditLog.open() appears to succeed even if all EDITS directories fail

2011-06-10 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon reopened HDFS-1952:
---


This still needs to be committed to 0.22.

> FSEditLog.open() appears to succeed even if all EDITS directories fail
> --
>
> Key: HDFS-1952
> URL: https://issues.apache.org/jira/browse/HDFS-1952
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.22.0, 0.23.0
>Reporter: Matt Foley
>Assignee: Andrew Wang
>  Labels: newbie
> Attachments: hdfs-1952-0.22.patch, hdfs-1952.patch, hdfs-1952.patch, 
> hdfs-1952.patch
>
>
> FSEditLog.open() appears to "succeed" even if all of the individual 
> directories failed to allow creation of an EditLogOutputStream.  The problem 
> and solution are essentially similar to that of HDFS-1505.
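
For illustration, a minimal sketch of the guard the report asks for; it is not the committed patch, and the class name and edits-file naming below are assumptions made for the sketch.

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;

// Sketch only: open() should fail loudly when no edits directory yields a
// usable output stream, instead of silently "succeeding" with none.
class EditLogOpenSketch {
  private final List<OutputStream> streams = new ArrayList<OutputStream>();

  void open(List<File> editsDirs) throws IOException {
    for (File dir : editsDirs) {
      try {
        streams.add(new FileOutputStream(new File(dir, "edits")));
      } catch (IOException e) {
        System.err.println("Unable to open edit log in " + dir + ": " + e);
      }
    }
    if (streams.isEmpty()) {
      throw new IOException("Failed to open any edit log output stream");
    }
  }
}
{code}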

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-1952) FSEditLog.open() appears to succeed even if all EDITS directories fail

2011-06-10 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley resolved HDFS-1952.
-

Resolution: Fixed

Resolving this, since it was committed to trunk.

> FSEditLog.open() appears to succeed even if all EDITS directories fail
> --
>
> Key: HDFS-1952
> URL: https://issues.apache.org/jira/browse/HDFS-1952
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.22.0, 0.23.0
>Reporter: Matt Foley
>Assignee: Andrew Wang
>  Labels: newbie
> Attachments: hdfs-1952-0.22.patch, hdfs-1952.patch, hdfs-1952.patch, 
> hdfs-1952.patch
>
>
> FSEditLog.open() appears to "succeed" even if all of the individual 
> directories failed to allow creation of an EditLogOutputStream.  The problem 
> and solution are essentially similar to that of HDFS-1505.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hadoop-Hdfs-trunk-Commit - Build # 732 - Still Failing

2011-06-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/732/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 2685 lines...]
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestDiskError
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 8.972 sec
[junit] Running 
org.apache.hadoop.hdfs.server.datanode.TestInterDatanodeProtocol
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.343 sec
[junit] Running 
org.apache.hadoop.hdfs.server.datanode.TestSimulatedFSDataset
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.78 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestBackupNode
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 18.442 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 31.656 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestComputeInvalidateWork
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 5.038 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestDatanodeDescriptor
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.164 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestEditLog
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 13.196 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestFileLimit
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 4.701 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestHeartbeatHandling
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.226 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestHost2NodesMap
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.092 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.111 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestOverReplicatedBlocks
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 4.117 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestPendingReplication
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.308 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestReplicationPolicy
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.057 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestSafeMode
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.789 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestStartup
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 10.931 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestStorageRestore
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.584 sec
[junit] Running org.apache.hadoop.net.TestNetworkTopology
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.115 sec
[junit] Running org.apache.hadoop.security.TestPermission
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.301 sec

checkfailure:
[touch] Creating 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:728:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:685:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:753:
 Tests failed!

Total time: 9 minutes 13 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that 
failed

Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed 
results to identify the command that failed
at 
org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)





[jira] [Created] (HDFS-2064) Warm HA NameNode going Hot

2011-06-10 Thread Konstantin Shvachko (JIRA)
Warm HA NameNode going Hot
--

 Key: HDFS-2064
 URL: https://issues.apache.org/jira/browse/HDFS-2064
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: name-node
Affects Versions: 0.22.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko


This is the design for automatic hot HA for the HDFS NameNode. It involves the 
use of HA software and a LoadReplicator, components external to Hadoop, which 
substantially simplifies the architecture by separating HA-specific problems 
from Hadoop-specific ones. Without the external components it provides a warm 
standby with manual failover.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-2063) Recent bin/lib changes broke the libhdfs test

2011-06-10 Thread Eli Collins (JIRA)
Recent bin/lib changes broke the libhdfs test
-

 Key: HDFS-2063
 URL: https://issues.apache.org/jira/browse/HDFS-2063
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Eli Collins


Looks like the recent bin/script shuffling in HDFS-1963 broke the libhdfs test. 
This works on 22.

{noformat}
$ ant -Dlibhdfs=true compile test
...
 [exec] Hadoop common not found.
 [exec] /home/eli/src/hdfs3/src/c++/libhdfs/tests/test-libhdfs.sh: line 
181: /home/eli/src/hdfs3/bin/hadoop-daemon.sh: No such file or directory
 [exec] /home/eli/src/hdfs3/src/c++/libhdfs/tests/test-libhdfs.sh: line 
182: /home/eli/src/hdfs3/bin/hadoop-daemon.sh: No such file or directory
 [exec] Wait 30s for the datanode to start up...
{noformat}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-2062) The Hudson pre-commit job should run the libhdfs test

2011-06-10 Thread Eli Collins (JIRA)
The Hudson pre-commit job should run the libhdfs test
-

 Key: HDFS-2062
 URL: https://issues.apache.org/jira/browse/HDFS-2062
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Eli Collins


The libhdfs test does not currently run as part of the pre-commit Hudson job. 
It should, since libhdfs is not contrib. The command to run is: {{ant 
-Dlibhdfs=true compile test}}. We just need to make sure the Hudson slaves are 
capable of building the native code and running the test.



--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hadoop-Hdfs-trunk-Commit - Build # 731 - Still Failing

2011-06-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/731/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 2685 lines...]
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestDiskError
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 9.118 sec
[junit] Running 
org.apache.hadoop.hdfs.server.datanode.TestInterDatanodeProtocol
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.227 sec
[junit] Running 
org.apache.hadoop.hdfs.server.datanode.TestSimulatedFSDataset
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.771 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestBackupNode
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 18.281 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 30.446 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestComputeInvalidateWork
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 4.981 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestDatanodeDescriptor
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.173 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestEditLog
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 13.109 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestFileLimit
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 4.647 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestHeartbeatHandling
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.229 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestHost2NodesMap
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.092 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.115 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestOverReplicatedBlocks
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 4.038 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestPendingReplication
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.299 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestReplicationPolicy
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.056 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestSafeMode
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.697 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestStartup
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 10.915 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestStorageRestore
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.38 sec
[junit] Running org.apache.hadoop.net.TestNetworkTopology
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.115 sec
[junit] Running org.apache.hadoop.security.TestPermission
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.234 sec

checkfailure:
[touch] Creating 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:728:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:685:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:753:
 Tests failed!

Total time: 9 minutes 47 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that 
failed

Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed 
results to identify the command that failed
at 
org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)





[jira] [Created] (HDFS-2061) two minor bugs in BlockManager block report processing

2011-06-10 Thread Matt Foley (JIRA)
two minor bugs in BlockManager block report processing
--

 Key: HDFS-2061
 URL: https://issues.apache.org/jira/browse/HDFS-2061
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0
Reporter: Matt Foley
Assignee: Matt Foley
Priority: Minor
 Fix For: 0.23.0


In a recent review of the HDFS-1295 patches (speedup for block report processing), 
I found two very minor bugs in BlockManager, as documented in the following comments.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-2060) DFS client RPCs using protobufs

2011-06-10 Thread Todd Lipcon (JIRA)
DFS client RPCs using protobufs
---

 Key: HDFS-2060
 URL: https://issues.apache.org/jira/browse/HDFS-2060
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon


The most important place for wire-compatibility in DFS is between clients and 
the cluster, since lockstep upgrade is very difficult and a single client may 
want to talk to multiple server versions. So, I'd like to focus this JIRA on 
making the RPCs between the DFS client and the NN/DNs wire-compatible using 
protocol buffer based serialization.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-2059) FsShell should support symbol links

2011-06-10 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins resolved HDFS-2059.
---

Resolution: Duplicate

This is a dupe of HDFS-1788. When FsShell has been ported over to FileContext 
(HADOOP-6424) it can support symlinks.

> FsShell should support symbol links
> ---
>
> Key: HDFS-2059
> URL: https://issues.apache.org/jira/browse/HDFS-2059
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs client
>Affects Versions: 0.21.0
>Reporter: Bochun Bai
>
> bin/hadoop fs -cat 
> The above command does not work.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: TestHDFSCLI is broken

2011-06-10 Thread Eli Collins
Awesome. Thanks Daryn!

On Fri, Jun 10, 2011 at 6:33 AM, Daryn Sharp  wrote:
> Hi Eli,
>
> I noticed the issue yesterday.  It's from a recent change of mine in common, 
> and I'm not sure how I didn't catch the problem...  I must have missed doing 
> a veryclean in hdfs before running the tests.  I'll have a patch up this 
> morning.
>
> Daryn
>
>
> On Jun 9, 2011, at 5:33 PM, Eli Collins wrote:
>
>> Hey guys,
>>
>> TestHDFSCLI is failing on trunk. It's been failing for several days
>> (so it's not HDFS-494).
>>
>> https://builds.apache.org/job/Hadoop-Hdfs-trunk/lastCompletedBuild/testReport
>>
>> Is anyone looking at this or aware of what change would have caused
>> this?  Looks like it started on June 7th.
>>
>> The output of the test is noisy enough that it's hard to quickly see
>> what caused the issue.
>>
>> Thanks,
>> Eli
>
>


Re: TestHDFSCLI is broken

2011-06-10 Thread Daryn Sharp
Hi Eli,

I noticed the issue yesterday.  It's from a recent change of mine in common, 
and I'm not sure how I didn't catch the problem...  I must have missed doing a 
veryclean in hdfs before running the tests.  I'll have a patch up this morning.

Daryn


On Jun 9, 2011, at 5:33 PM, Eli Collins wrote:

> Hey guys,
> 
> TestHDFSCLI is failing on trunk. It's been failing for several days
> (so it's not HDFS-494).
> 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/lastCompletedBuild/testReport
> 
> Is anyone looking at this or aware of what change would have caused
> this?  Looks like it started on June 7th.
> 
> The output of the test is noisy enough that it's hard to quickly see
> what caused the issue.
> 
> Thanks,
> Eli



Hadoop-Hdfs-trunk - Build # 693 - Still Failing

2011-06-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/693/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 942968 lines...]
[junit] 2011-06-10 12:55:33,017 INFO  datanode.DataNode 
(DataNode.java:shutdown(1620)) - Waiting for threadgroup to exit, active 
threads is 0
[junit] 2011-06-10 12:55:33,017 WARN  datanode.DataNode 
(DataNode.java:offerService(1076)) - BPOfferService for block 
pool=BP-1502019095-127.0.1.1-1307710531644 received 
exception:java.lang.InterruptedException
[junit] 2011-06-10 12:55:33,017 WARN  datanode.DataNode 
(DataNode.java:run(1229)) - DatanodeRegistration(127.0.0.1:36406, 
storageID=DS-1481721775-127.0.1.1-36406-1307710532068, infoPort=35344, 
ipcPort=57926, storageInfo=lv=-36;cid=testClusterID;nsid=1170596907;c=0) ending 
block pool service for: BP-1502019095-127.0.1.1-1307710531644
[junit] 2011-06-10 12:55:33,118 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:removeBlockPool(276)) - Removed 
bpid=BP-1502019095-127.0.1.1-1307710531644 from blockPoolScannerMap
[junit] 2011-06-10 12:55:33,118 INFO  datanode.DataNode 
(FSDataset.java:shutdownBlockPool(2569)) - Removing block pool 
BP-1502019095-127.0.1.1-1307710531644
[junit] 2011-06-10 12:55:33,118 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk 
service threads...
[junit] 2011-06-10 12:55:33,118 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads 
have been shut down.
[junit] 2011-06-10 12:55:33,219 WARN  namenode.FSNamesystem 
(FSNamesystem.java:run(3085)) - ReplicationMonitor thread received 
InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2011-06-10 12:55:33,219 WARN  namenode.DecommissionManager 
(DecommissionManager.java:run(70)) - Monitor interrupted: 
java.lang.InterruptedException: sleep interrupted
[junit] 2011-06-10 12:55:33,219 INFO  namenode.FSEditLog 
(FSEditLog.java:printStatistics(580)) - Number of transactions: 6 Total time 
for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of 
syncs: 3 SyncTimes(ms): 12 12 
[junit] 2011-06-10 12:55:33,222 INFO  ipc.Server (Server.java:stop(1715)) - 
Stopping server on 53021
[junit] 2011-06-10 12:55:33,222 INFO  ipc.Server (Server.java:run(1539)) - 
IPC Server handler 0 on 53021: exiting
[junit] 2011-06-10 12:55:33,222 INFO  ipc.Server (Server.java:run(505)) - 
Stopping IPC Server listener on 53021
[junit] 2011-06-10 12:55:33,222 INFO  ipc.Server (Server.java:run(647)) - 
Stopping IPC Server Responder
[junit] 2011-06-10 12:55:33,222 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:stop(199)) - Stopping DataNode metrics system...
[junit] 2011-06-10 12:55:33,223 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:stopSources(408)) - Stopping metrics source JvmMetrics
[junit] 2011-06-10 12:55:33,223 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:stopSources(408)) - Stopping metrics source 
NameNodeActivity
[junit] 2011-06-10 12:55:33,223 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:stopSources(408)) - Stopping metrics source 
RpcActivityForPort53021
[junit] 2011-06-10 12:55:33,223 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:stopSources(408)) - Stopping metrics source 
RpcDetailedActivityForPort53021
[junit] 2011-06-10 12:55:33,224 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:stopSources(408)) - Stopping metrics source FSNamesystem
[junit] 2011-06-10 12:55:33,224 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:stopSources(408)) - Stopping metrics source 
RpcActivityForPort57926
[junit] 2011-06-10 12:55:33,224 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:stopSources(408)) - Stopping metrics source 
RpcDetailedActivityForPort57926
[junit] 2011-06-10 12:55:33,224 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:stopSources(408)) - Stopping metrics source JvmMetrics-1
[junit] 2011-06-10 12:55:33,225 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:stopSources(408)) - Stopping metrics source 
DataNodeActivity-h8.grid.sp2.yahoo.net-36406
[junit] 2011-06-10 12:55:33,225 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:stopSources(408)) - Stopping metrics source 
RpcActivityForPort34282
[junit] 2011-06-10 12:55:33,225 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:stopSources(408)) - Stopping metrics source 
RpcDetailedActivityForPort34282
[junit] 2011-06-10 12:55:33,225 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:stopSources(408)) - Stopping metrics source JvmMetrics-2
[junit] 2011-06-10 12:55:33,226 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:stopSources(408)) - Stopping metrics source 
DataNodeActivity-h8.grid.sp2.yahoo.net-35397
[junit] 2011-06-1

[jira] [Created] (HDFS-2059) FsShell should support symbol links

2011-06-10 Thread Bochun Bai (JIRA)
FsShell should support symbol links
---

 Key: HDFS-2059
 URL: https://issues.apache.org/jira/browse/HDFS-2059
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs client
Affects Versions: 0.21.0
Reporter: Bochun Bai


bin/hadoop fs -cat 
The above command does not work.


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira