Hadoop-Hdfs-trunk - Build # 559 - Still Failing
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/559/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
###################################################################################
[...truncated 649311 lines...]
    [junit] 2011-01-21 12:32:41,804 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-01-21 12:32:41,804 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(831)) - Shutting down DataNode 0
    [junit] 2011-01-21 12:32:41,906 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 60758
    [junit] 2011-01-21 12:32:41,906 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 60758: exiting
    [junit] 2011-01-21 12:32:41,907 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 60758
    [junit] 2011-01-21 12:32:41,907 INFO  datanode.DataNode (DataNode.java:shutdown(785)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-01-21 12:32:41,907 WARN  datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:37057, storageID=DS-1481487460-127.0.1.1-37057-1295613150825, infoPort=56900, ipcPort=60758):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit]
    [junit] 2011-01-21 12:32:41,907 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] 2011-01-21 12:32:41,909 INFO  datanode.DataNode (DataNode.java:shutdown(785)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-01-21 12:32:42,010 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-01-21 12:32:42,010 INFO  datanode.DataNode (DataNode.java:run(1459)) - DatanodeRegistration(127.0.0.1:37057, storageID=DS-1481487460-127.0.1.1-37057-1295613150825, infoPort=56900, ipcPort=60758):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-01-21 12:32:42,010 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 60758
    [junit] 2011-01-21 12:32:42,010 INFO  datanode.DataNode (DataNode.java:shutdown(785)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-01-21 12:32:42,011 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-01-21 12:32:42,011 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-01-21 12:32:42,011 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-01-21 12:32:42,116 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2844)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-01-21 12:32:42,117 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 6 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 7 3
    [junit] 2011-01-21 12:32:42,116 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-01-21 12:32:42,119 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 38925
    [junit] 2011-01-21 12:32:42,119 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 38925: exiting
    [junit] 2011-01-21 12:32:42,119 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 1 on 38925: exiting
    [junit] 2011-01-21 12:32:42,120 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 9 on 38925: exiting
    [junit] 2011-01-21 12:32:42,120 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 6 on 38925: exiting
    [junit] 2011-01-21 12:32:42,120 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] 2011-01-21 12:32:42,120 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 8 on 38925: exiting
    [junit] 2011-01-21 12:32:42,120 INFO  ipc.Server (Server.java:run(1443)) -
Hadoop-Hdfs-22-branch - Build # 16 - Still Failing
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-22-branch/16/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
###################################################################################
[...truncated 2903 lines...]
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 13.096 sec
    [junit] Running org.apache.hadoop.fs.permission.TestStickyBit
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 9.424 sec
    [junit] Running org.apache.hadoop.hdfs.TestBlockMissingException
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 18.076 sec
    [junit] Running org.apache.hadoop.hdfs.TestByteRangeInputStream
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.15 sec
    [junit] Running org.apache.hadoop.hdfs.TestClientBlockVerification
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 3.084 sec
    [junit] Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.417 sec
    [junit] Running org.apache.hadoop.hdfs.TestCrcCorruption
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 24.048 sec
    [junit] Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.838 sec
    [junit] Running org.apache.hadoop.hdfs.TestDFSClientRetries
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 46.043 sec
    [junit] Running org.apache.hadoop.hdfs.TestDFSPermission
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 16.944 sec
    [junit] Running org.apache.hadoop.hdfs.TestDFSRemove
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 13.98 sec
    [junit] Running org.apache.hadoop.hdfs.TestDFSStartupVersions
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 16.646 sec
    [junit] Running org.apache.hadoop.hdfs.TestDFSUpgrade
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 25.361 sec
    [junit] Running org.apache.hadoop.hdfs.TestDFSUtil
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.194 sec
    [junit] Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 176.79 sec
    [junit] Running org.apache.hadoop.hdfs.TestDatanodeConfig
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.528 sec
    [junit] Running org.apache.hadoop.hdfs.TestDatanodeDeath
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 121.615 sec
    [junit] Running org.apache.hadoop.hdfs.TestDatanodeRegistration
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.966 sec
    [junit] Running org.apache.hadoop.hdfs.TestDecommission
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 32.299 sec
    [junit] Running org.apache.hadoop.hdfs.TestDeprecatedKeys
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.23 sec
    [junit] Running org.apache.hadoop.hdfs.TestDfsOverAvroRpc
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.87 sec
    [junit] Running org.apache.hadoop.hdfs.TestFileAppend4
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 9.496 sec
    [junit] Running org.apache.hadoop.hdfs.TestFileConcurrentReader
    [junit] Tests run: 7, Failures: 0, Errors: 2, Time elapsed: 16.123 sec
    [junit] Test org.apache.hadoop.hdfs.TestFileConcurrentReader FAILED
    [junit] Running org.apache.hadoop.hdfs.TestFileCreation
    [junit] Tests run: 12, Failures: 0, Errors: 0, Time elapsed: 47.422 sec
    [junit] Running org.apache.hadoop.hdfs.TestFileCreationClient
Build timed out. Aborting
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure

###################################################################################
############################## FAILED TESTS (if any) ##############################
###################################################################################
4 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransfer

Error Message:
Too many open files

Stack Trace:
java.io.IOException: Too many open files
	at sun.nio.ch.EPollArrayWrapper.epollCreate(Native Method)
	at sun.nio.ch.EPollArrayWrapper.<init>(EPollArrayWrapper.java:68)
	at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:52)
	at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
	at java.nio.channels.Selector.open(Selector.java:209)
	at org.apache.hadoop.ipc.Server$Responder.<init>(Server.java:602)
	at org.apache.hadoop.ipc.Server.<init>(Server.java:1500)
	at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:394)
	at
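The failure above surfaces at Selector.open(): every IPC Server constructed by the tests creates an epoll-backed selector, and each selector holds file descriptors until it is closed, so servers that are never shut down eventually exhaust the process fd limit and epollCreate fails. A minimal, self-contained sketch of that mechanism (not Hadoop code):

```java
import java.io.IOException;
import java.nio.channels.Selector;
import java.util.ArrayList;
import java.util.List;

public class SelectorLeakSketch {
    public static void main(String[] args) throws IOException {
        // Each Selector.open() allocates kernel resources (on Linux, an
        // epoll instance plus a wakeup pipe). Selectors that are opened but
        // never closed accumulate descriptors until the process hits its
        // fd limit and fails with "Too many open files".
        List<Selector> held = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            held.add(Selector.open());
        }
        System.out.println("open selectors: " + held.size());

        // Closing each selector releases its descriptors back to the process.
        for (Selector s : held) {
            s.close();
        }
        System.out.println("first selector open after close: " + held.get(0).isOpen());
    }
}
```

Raising the limit (e.g. via `ulimit -n` on the build slave) only postpones this kind of failure; the leak itself has to be closed.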
[jira] Created: (HDFS-1591) Fix javac, javadoc, findbugs warnings
             Fix javac, javadoc, findbugs warnings
             -------------------------------------

                 Key: HDFS-1591
                 URL: https://issues.apache.org/jira/browse/HDFS-1591
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Po Cheung
            Assignee: Po Cheung
             Fix For: 0.22.0

Split from HADOOP-6642

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Resolved: (HDFS-1586) Add InterfaceAudience annotation to MiniDFSCluster
[ https://issues.apache.org/jira/browse/HDFS-1586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas resolved HDFS-1586.
-----------------------------------
    Resolution: Fixed

I committed the patch.

> Add InterfaceAudience annotation to MiniDFSCluster
> --------------------------------------------------
>
>                 Key: HDFS-1586
>                 URL: https://issues.apache.org/jira/browse/HDFS-1586
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Suresh Srinivas
>            Assignee: Suresh Srinivas
>         Attachments: HDFS-1586.1.patch, HDFS-1586.patch
>
> MiniDFSCluster is used both by hdfs and mapreduce. Annotation needs to be added to this class to reflect this.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
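The change in HDFS-1586 is a class-level audience annotation. The sketch below shows the pattern with a locally defined stand-in annotation; the real one is Hadoop's InterfaceAudience (in org.apache.hadoop.classification), and the exact audience chosen for MiniDFSCluster is whatever the committed patch specifies, not reproduced here:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AudienceSketch {
    // Stand-in for Hadoop's InterfaceAudience annotation: a marker that
    // records which projects are allowed to depend on a class.
    @Retention(RetentionPolicy.RUNTIME)
    @interface LimitedPrivate {
        String[] value();
    }

    // MiniDFSCluster is used by both HDFS and MapReduce, so the annotation
    // makes that cross-project audience explicit on the class itself.
    @LimitedPrivate({"HDFS", "MapReduce"})
    static class MiniDFSClusterLike {}

    public static void main(String[] args) {
        LimitedPrivate audience =
            MiniDFSClusterLike.class.getAnnotation(LimitedPrivate.class);
        // Tools (and reviewers) can then read the intended audience back.
        System.out.println(String.join(",", audience.value()));
    }
}
```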
[jira] Resolved: (HDFS-1588) Add dfs.hosts.exclude to DFSConfigKeys and use constant instead of hardcoded string
[ https://issues.apache.org/jira/browse/HDFS-1588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas resolved HDFS-1588.
-----------------------------------
      Resolution: Fixed
    Hadoop Flags: [Reviewed]

I committed the patch. Thank you Erik.

> Add dfs.hosts.exclude to DFSConfigKeys and use constant instead of hardcoded string
> -----------------------------------------------------------------------------------
>
>                 Key: HDFS-1588
>                 URL: https://issues.apache.org/jira/browse/HDFS-1588
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 0.23.0
>            Reporter: Erik Steffl
>            Assignee: Erik Steffl
>             Fix For: 0.23.0
>         Attachments: HDFS-1588-0.23.patch

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
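The change in HDFS-1588 is mechanical but worth illustrating: reading a configuration key through a shared constant instead of a repeated string literal lets the compiler catch typos and keeps the key defined in one place. A minimal sketch (the DFSConfigKeys class here is a stand-in for org.apache.hadoop.hdfs.DFSConfigKeys, and the constant name is illustrative; the exact identifier is in the patch):

```java
public class ExcludeKeySketch {
    // Stand-in for org.apache.hadoop.hdfs.DFSConfigKeys, which centralizes
    // configuration key names as compile-time constants.
    static final class DFSConfigKeys {
        static final String DFS_HOSTS_EXCLUDE = "dfs.hosts.exclude";
    }

    public static void main(String[] args) {
        // Before: conf.get("dfs.hosts.exclude")             // typo-prone literal
        // After:  conf.get(DFSConfigKeys.DFS_HOSTS_EXCLUDE) // checked at compile time
        System.out.println(DFSConfigKeys.DFS_HOSTS_EXCLUDE);
    }
}
```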
[jira] Resolved: (HDFS-1448) Create multi-format parser for edits logs file, support binary and XML formats initially
[ https://issues.apache.org/jira/browse/HDFS-1448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo (Nicholas), SZE resolved HDFS-1448.
------------------------------------------
    Resolution: Fixed

I have committed it. Thanks, Erik!

Erik, please add Release Note for the new feature.

> Create multi-format parser for edits logs file, support binary and XML formats initially
> ----------------------------------------------------------------------------------------
>
>                 Key: HDFS-1448
>                 URL: https://issues.apache.org/jira/browse/HDFS-1448
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: tools
>    Affects Versions: 0.22.0
>            Reporter: Erik Steffl
>            Assignee: Erik Steffl
>             Fix For: 0.23.0
>         Attachments: editsStored, HDFS-1448-0.22-1.patch, HDFS-1448-0.22-2.patch, HDFS-1448-0.22-3.patch, HDFS-1448-0.22-4.patch, HDFS-1448-0.22-5.patch, HDFS-1448-0.22.patch, Viewer hierarchy.pdf
>
> Create multi-format parser for edits logs file, support binary and XML formats initially. Parsing should work from any supported format to any other supported format (e.g. from binary to XML and from XML to binary).
> The binary format is the format used by FSEditLog class to read/write edits file.
> Primary reason to develop this tool is to help with troubleshooting: the binary format is hard to read and edit (for human troubleshooters). Longer term it could be used to clean up and minimize parsers for fsimage and edits files.
> Edits parser OfflineEditsViewer is written in a very similar fashion to OfflineImageViewer. Next step would be to merge OfflineImageViewer and OfflineEditsViewer and use the result in both FSImage and FSEditLog. This is subject to change, specifically depending on adoption of avro (which would completely change how objects are serialized as well as provide ways to convert files to different formats).

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
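The design the issue describes, one parser walking the edits records while pluggable output visitors decide the format, is the same shape as OfflineImageViewer. A hedged sketch of that structure (all names here are illustrative, not Hadoop's actual classes):

```java
public class EditsVisitorSketch {
    // One parser walks the edits records; a pluggable visitor decides the
    // output format, so any supported input can be re-emitted as any
    // supported output -- the structure HDFS-1448 attributes to
    // OfflineEditsViewer.
    interface EditsVisitor {
        String visitOp(String opName, long txid);
    }

    // Emits each parsed record as XML.
    static class XmlVisitor implements EditsVisitor {
        public String visitOp(String opName, long txid) {
            return "<RECORD><OPCODE>" + opName + "</OPCODE><TXID>" + txid
                 + "</TXID></RECORD>";
        }
    }

    // Emits each parsed record as plain text.
    static class TextVisitor implements EditsVisitor {
        public String visitOp(String opName, long txid) {
            return txid + " " + opName;
        }
    }

    public static void main(String[] args) {
        // The same parsed record flows through whichever visitor is plugged in.
        EditsVisitor xml = new XmlVisitor();
        EditsVisitor txt = new TextVisitor();
        System.out.println(xml.visitOp("OP_ADD", 1));
        System.out.println(txt.visitOp("OP_ADD", 1));
    }
}
```

Round-tripping (XML back to binary) then only requires an XML reader that feeds the same record stream into the binary-emitting visitor.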