See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/546/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 463989 lines...]
[junit] 2011-01-08 12:04:54,920 INFO mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:51129
[junit] 2011-01-08 12:04:54,920 INFO namenode.NameNode (NameNode.java:run(523)) - NameNode Web-server up at: localhost/127.0.0.1:51129
[junit] 2011-01-08 12:04:54,921 INFO ipc.Server (Server.java:run(608)) - IPC Server Responder: starting
[junit] 2011-01-08 12:04:54,921 INFO ipc.Server (Server.java:run(443)) - IPC Server listener on 40548: starting
[junit] 2011-01-08 12:04:54,922 INFO ipc.Server (Server.java:run(1369)) - IPC Server handler 0 on 40548: starting
[junit] 2011-01-08 12:04:54,922 INFO ipc.Server (Server.java:run(1369)) - IPC Server handler 3 on 40548: starting
[junit] 2011-01-08 12:04:54,922 INFO ipc.Server (Server.java:run(1369)) - IPC Server handler 2 on 40548: starting
[junit] 2011-01-08 12:04:54,922 INFO ipc.Server (Server.java:run(1369)) - IPC Server handler 1 on 40548: starting
[junit] 2011-01-08 12:04:54,923 INFO ipc.Server (Server.java:run(1369)) - IPC Server handler 4 on 40548: starting
[junit] 2011-01-08 12:04:54,923 INFO ipc.Server (Server.java:run(1369)) - IPC Server handler 5 on 40548: starting
[junit] 2011-01-08 12:04:54,923 INFO ipc.Server (Server.java:run(1369)) - IPC Server handler 6 on 40548: starting
[junit] 2011-01-08 12:04:54,924 INFO ipc.Server (Server.java:run(1369)) - IPC Server handler 7 on 40548: starting
[junit] 2011-01-08 12:04:54,924 INFO ipc.Server (Server.java:run(1369)) - IPC Server handler 8 on 40548: starting
[junit] 2011-01-08 12:04:54,924 INFO ipc.Server (Server.java:run(1369)) - IPC Server handler 9 on 40548: starting
[junit] 2011-01-08 12:04:54,924 INFO namenode.NameNode (NameNode.java:initialize(390)) - NameNode up at: localhost/127.0.0.1:40548
[junit] Starting DataNode 0 with dfs.datanode.data.dir: file:/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data1/,file:/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data2/
[junit] 2011-01-08 12:04:55,079 INFO datanode.DataNode (DataNode.java:initDataXceiver(472)) - Opened info server at 50470
[junit] 2011-01-08 12:04:55,083 INFO datanode.DataNode (DataXceiverServer.java:<init>(77)) - Balancing bandwith is 1048576 bytes/s
[junit] 2011-01-08 12:04:55,090 INFO common.Storage (DataStorage.java:recoverTransitionRead(127)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data1 is not formatted.
[junit] 2011-01-08 12:04:55,090 INFO common.Storage (DataStorage.java:recoverTransitionRead(128)) - Formatting ...
[junit] 2011-01-08 12:04:55,094 INFO common.Storage (DataStorage.java:recoverTransitionRead(127)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data2 is not formatted.
[junit] 2011-01-08 12:04:55,094 INFO common.Storage (DataStorage.java:recoverTransitionRead(128)) - Formatting ...
[junit] 2011-01-08 12:04:55,147 INFO datanode.DataNode (FSDataset.java:registerMBean(1772)) - Registered FSDatasetStatusMBean
[junit] 2011-01-08 12:04:55,155 INFO datanode.DirectoryScanner (DirectoryScanner.java:<init>(149)) - scan starts at 1294498367155 with interval 21600000
[junit] 2011-01-08 12:04:55,157 INFO http.HttpServer (HttpServer.java:addGlobalFilter(409)) - Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
[junit] 2011-01-08 12:04:55,160 INFO http.HttpServer (HttpServer.java:start(579)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
[junit] 2011-01-08 12:04:55,160 INFO http.HttpServer (HttpServer.java:start(584)) - listener.getLocalPort() returned 38806 webServer.getConnectors()[0].getLocalPort() returned 38806
[junit] 2011-01-08 12:04:55,161 INFO http.HttpServer (HttpServer.java:start(617)) - Jetty bound to port 38806
[junit] 2011-01-08 12:04:55,161 INFO mortbay.log (?:invoke0(?)) - jetty-6.1.14
[junit] 2011-01-08 12:04:55,319 INFO mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:38806
[junit] 2011-01-08 12:04:55,321 INFO jvm.JvmMetrics (JvmMetrics.java:init(71)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
[junit] 2011-01-08 12:04:55,326 INFO ipc.Server (Server.java:run(338)) - Starting SocketReader
[junit] 2011-01-08 12:04:55,326 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(63)) - Initializing RPC Metrics with hostName=DataNode, port=48609
[junit] 2011-01-08 12:04:55,327 INFO metrics.RpcDetailedMetrics (RpcDetailedMetrics.java:<init>(57)) - Initializing RPC Metrics with hostName=DataNode, port=48609
[junit] 2011-01-08 12:04:55,328 INFO datanode.DataNode (DataNode.java:initIpcServer(432)) - dnRegistration = DatanodeRegistration(h9.grid.sp2.yahoo.net:50470, storageID=, infoPort=38806, ipcPort=48609)
[junit] 2011-01-08 12:04:55,333 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(2514)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:50470 storage DS-1758152017-127.0.1.1-50470-1294488295332
[junit] 2011-01-08 12:04:55,340 INFO net.NetworkTopology (NetworkTopology.java:add(331)) - Adding a new node: /default-rack/127.0.0.1:50470
[junit] 2011-01-08 12:04:55,345 INFO datanode.DataNode (DataNode.java:register(714)) - New storage id DS-1758152017-127.0.1.1-50470-1294488295332 is assigned to data-node 127.0.0.1:50470
[junit] 2011-01-08 12:04:55,346 INFO datanode.DataNode (DataNode.java:run(1438)) - DatanodeRegistration(127.0.0.1:50470, storageID=DS-1758152017-127.0.1.1-50470-1294488295332, infoPort=38806, ipcPort=48609)In DataNode.run, data = FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2011-01-08 12:04:55,346 INFO ipc.Server (Server.java:run(608)) - IPC Server Responder: starting
[junit] 2011-01-08 12:04:55,347 INFO ipc.Server (Server.java:run(443)) - IPC Server listener on 48609: starting
[junit] 2011-01-08 12:04:55,347 INFO ipc.Server (Server.java:run(1369)) - IPC Server handler 0 on 48609: starting
[junit] 2011-01-08 12:04:55,347 INFO ipc.Server (Server.java:run(1369)) - IPC Server handler 1 on 48609: starting
[junit] 2011-01-08 12:04:55,348 INFO ipc.Server (Server.java:run(1369)) - IPC Server handler 2 on 48609: starting
[junit] 2011-01-08 12:04:55,348 INFO datanode.DataNode (DataNode.java:offerService(904)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
[junit] 2011-01-08 12:04:55,360 INFO datanode.DataNode (DataNode.java:blockReport(1143)) - BlockReport of 0 blocks got processed in 8 msecs
[junit] 2011-01-08 12:04:55,360 INFO datanode.DataNode (DataNode.java:offerService(946)) - Starting Periodic block scanner.
[junit] 2011-01-08 12:04:55,435 INFO FSNamesystem.audit (FSNamesystem.java:logAuditEvent(148)) - ugi=hudson ip=/127.0.0.1 cmd=create src=/testWriteConf.xml dst=null perm=hudson:supergroup:rw-r--r--
[junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 60.074 sec
Build timed out. Aborting
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
5 tests failed.
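
The first two failures below are file-descriptor exhaustion (error=24, EMFILE), and the remaining three are plausibly fallout from the same unhealthy slave: a leaked directory lock from an aborted cluster, a wedged write pipeline, and a truncated checkpoint image. A quick Linux-only probe for such leaks, offered as a hedged sketch rather than anything in the Hudson tooling, is to count the entries in /proc/self/fd between tests:

    import java.io.File;

    // Linux-only: /proc/self/fd holds one symlink per descriptor this
    // JVM currently has open, so its entry count is the open-fd count.
    public final class OpenFdCount {
        public static int count() {
            String[] fds = new File("/proc/self/fd").list();
            return fds == null ? -1 : fds.length;   // -1 off Linux/procfs
        }

        public static void main(String[] args) {
            System.out.println("open fds: " + count());
        }
    }
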
REGRESSION: org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorTransferToVerySmallWrite

Error Message:
java.io.FileNotFoundException: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/classes/hdfs-default.xml (Too many open files)

Stack Trace:
java.lang.RuntimeException: java.io.FileNotFoundException: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/classes/hdfs-default.xml (Too many open files)
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1523)
    at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1388)
    at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1334)
    at org.apache.hadoop.conf.Configuration.set(Configuration.java:577)
    at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:781)
    at org.apache.hadoop.hdfs.TestFileConcurrentReader.runTestUnfinishedBlockCRCError(TestFileConcurrentReader.java:313)
    at org.apache.hadoop.hdfs.TestFileConcurrentReader.runTestUnfinishedBlockCRCError(TestFileConcurrentReader.java:302)
    at org.apache.hadoop.hdfs.TestFileConcurrentReader.__CLR3_0_2u5mf5tqxn(TestFileConcurrentReader.java:275)
    at org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorTransferToVerySmallWrite(TestFileConcurrentReader.java:274)
Caused by: java.io.FileNotFoundException: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/classes/hdfs-default.xml (Too many open files)
    at java.io.FileInputStream.open(Native Method)
    at java.io.FileInputStream.<init>(FileInputStream.java:106)
    at java.io.FileInputStream.<init>(FileInputStream.java:66)
    at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:70)
    at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:161)
    at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:653)
    at com.sun.org.apache.xerces.internal.impl.XMLVersionDetector.determineDocVersion(XMLVersionDetector.java:186)
    at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771)
    at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
    at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107)
    at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:225)
    at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:283)
    at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:180)
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1437)
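
The FileNotFoundException here is misleading: hdfs-default.xml exists on the classpath, but open(2) failed with EMFILE because the JVM was out of file descriptors. A minimal, self-contained sketch (illustrative only, not Hadoop code) that reproduces the same symptom by leaking streams:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    // Shows descriptor exhaustion surfacing as FileNotFoundException:
    // the target file exists, but open() fails with EMFILE (errno 24).
    public class FdExhaustionDemo {
        public static void main(String[] args) throws IOException {
            File target = new File(args.length > 0 ? args[0] : "/etc/hosts");
            List<FileInputStream> leaked = new ArrayList<FileInputStream>();
            try {
                // Simulate a stream leak: open repeatedly, never close.
                while (true) {
                    leaked.add(new FileInputStream(target));
                }
            } catch (IOException e) {
                // Typically: java.io.FileNotFoundException: ... (Too many open files)
                System.err.println("Failed after " + leaked.size() + " opens: " + e);
            } finally {
                for (FileInputStream in : leaked) {
                    in.close();
                }
            }
        }
    }
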
REGRESSION: org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransfer

Error Message:
Error while running command to get file permissions : java.io.IOException: Cannot run program "/bin/ls": java.io.IOException: error=24, Too many open files
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
    at org.apache.hadoop.util.Shell.run(Shell.java:188)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
    at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:565)
    at org.apache.hadoop.fs.RawLocalFileSystem.access$100(RawLocalFileSystem.java:49)
    at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:491)
    at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:466)
    at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:131)
    at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:148)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getDataDirsFromURIs(DataNode.java:1594)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1572)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1518)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1485)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:630)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:464)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:186)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:71)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:178)
    at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
    at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)
    at junit.framework.TestCase.runBare(TestCase.java:132)
    at junit.framework.TestResult$1.protect(TestResult.java:110)
    at junit.framework.TestResult.runProtected(TestResult.java:128)
    at junit.framework.TestResult.run(TestResult.java:113)
    at junit.framework.TestCase.run(TestCase.java:124)
    at junit.framework.TestSuite.runTest(TestSuite.java:232)
    at junit.framework.TestSuite.run(TestSuite.java:227)
    at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
Caused by: java.io.IOException: java.io.IOException: error=24, Too many open files
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
    at java.lang.ProcessImpl.start(ProcessImpl.java:65)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
    ... 34 more

Stack Trace:
java.lang.RuntimeException: Error while running command to get file permissions : java.io.IOException: Cannot run program "/bin/ls": java.io.IOException: error=24, Too many open files
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
    at org.apache.hadoop.util.Shell.run(Shell.java:188)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
    at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:565)
    at org.apache.hadoop.fs.RawLocalFileSystem.access$100(RawLocalFileSystem.java:49)
    at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:491)
    at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:466)
    at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:131)
    at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:148)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getDataDirsFromURIs(DataNode.java:1594)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1572)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1518)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1485)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:630)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:464)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:186)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:71)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:178)
    at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
    at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)
Caused by: java.io.IOException: java.io.IOException: error=24, Too many open files
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
    at java.lang.ProcessImpl.start(ProcessImpl.java:65)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
    at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:516)
    at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:466)
    at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:131)
    at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:148)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getDataDirsFromURIs(DataNode.java:1594)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1572)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1518)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1485)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:630)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:464)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:186)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:71)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:178)
    at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
    at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)
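
Here the same EMFILE condition surfaces through fork/exec instead of open: RawLocalFileSystem shells out to /bin/ls to read a file's permissions, and ProcessBuilder.start() cannot fork once descriptors are exhausted. A rough sketch of that kind of permission probe (an approximation of loadPermissionInfo, not the actual Hadoop source):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    // Shells out to /bin/ls to read a permission string. With the JVM
    // out of descriptors, ProcessBuilder.start() itself fails with
    // "IOException: error=24, Too many open files" before ls ever runs.
    public class LsPermissionProbe {
        public static String permissionString(String path)
                throws IOException, InterruptedException {
            Process p = new ProcessBuilder("/bin/ls", "-ld", path).start();
            BufferedReader out =
                new BufferedReader(new InputStreamReader(p.getInputStream()));
            try {
                String line = out.readLine();      // e.g. "drwxr-xr-x 2 hudson ..."
                if (line == null || p.waitFor() != 0) {
                    throw new IOException("ls failed for " + path);
                }
                return line.split("\\s+")[0];      // the "-rw-r--r--" field
            } finally {
                out.close();
            }
        }

        public static void main(String[] args) throws Exception {
            System.out.println(permissionString(args.length > 0 ? args[0] : "/tmp"));
        }
    }
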
FAILED: org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransferVerySmallWrite

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1342)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1360)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1408)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:202)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:451)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:186)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:71)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:178)
    at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
    at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)
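
This one looks like fallout rather than a fresh bug: the previous test's MiniDFSCluster never tore down cleanly, so the name directory's lock was still held when setUp tried to format it. HDFS guards each storage directory with an advisory file lock on a lock file; a minimal sketch of that pattern (in the spirit of Storage$StorageDirectory.lock(), not a copy of it):

    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.channels.FileLock;
    import java.nio.channels.OverlappingFileLockException;

    // Advisory directory locking via a lock file. tryLock() returns null
    // when another process holds the lock, and throws
    // OverlappingFileLockException when this JVM already holds it --
    // either way the directory must be treated as "already locked".
    public class DirectoryLock {
        public static FileLock lock(File storageDir) throws IOException {
            File lockFile = new File(storageDir, "in_use.lock");
            RandomAccessFile file = new RandomAccessFile(lockFile, "rws");
            FileLock lock = null;
            try {
                lock = file.getChannel().tryLock();
            } catch (OverlappingFileLockException e) {
                // lock stays null: held elsewhere in this same JVM
            }
            if (lock == null) {
                file.close();
                throw new IOException("Cannot lock storage " + storageDir
                    + ". The directory is already locked.");
            }
            return lock;   // caller must release() the lock on shutdown
        }
    }
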
FAILED: org.apache.hadoop.hdfs.TestWriteConfigurationToDFS.testWriteConf

Error Message:
test timed out after 60000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 60000 milliseconds
    at java.lang.Object.wait(Native Method)
    at java.lang.Object.wait(Object.java:485)
    at org.apache.hadoop.hdfs.DFSOutputStream.waitAndQueueCurrentPacket(DFSOutputStream.java:1169)
    at org.apache.hadoop.hdfs.DFSOutputStream.writeChunk(DFSOutputStream.java:1228)
    at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:161)
    at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:104)
    at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:90)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:54)
    at java.io.DataOutputStream.write(DataOutputStream.java:90)
    at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
    at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:263)
    at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:106)
    at java.io.OutputStreamWriter.write(OutputStreamWriter.java:190)
    at com.sun.org.apache.xml.internal.serializer.ToStream.characters(ToStream.java:1499)
    at com.sun.org.apache.xml.internal.serializer.ToUnknownStream.characters(ToUnknownStream.java:789)
    at com.sun.org.apache.xml.internal.serializer.ToUnknownStream.characters(ToUnknownStream.java:323)
    at com.sun.org.apache.xalan.internal.xsltc.trax.DOM2TO.parse(DOM2TO.java:240)
    at com.sun.org.apache.xalan.internal.xsltc.trax.DOM2TO.parse(DOM2TO.java:226)
    at com.sun.org.apache.xalan.internal.xsltc.trax.DOM2TO.parse(DOM2TO.java:226)
    at com.sun.org.apache.xalan.internal.xsltc.trax.DOM2TO.parse(DOM2TO.java:226)
    at com.sun.org.apache.xalan.internal.xsltc.trax.DOM2TO.parse(DOM2TO.java:132)
    at com.sun.org.apache.xalan.internal.xsltc.trax.DOM2TO.parse(DOM2TO.java:94)
    at com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transformIdentity(TransformerImpl.java:662)
    at com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transform(TransformerImpl.java:708)
    at com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transform(TransformerImpl.java:313)
    at org.apache.hadoop.conf.Configuration.writeXml(Configuration.java:1608)
    at org.apache.hadoop.conf.Configuration.writeXml(Configuration.java:1559)
    at org.apache.hadoop.hdfs.TestWriteConfigurationToDFS.__CLR3_0_28n7kbs1103(TestWriteConfigurationToDFS.java:46)
    at org.apache.hadoop.hdfs.TestWriteConfigurationToDFS.testWriteConf(TestWriteConfigurationToDFS.java:33)
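
The timeout is a writer parked in DFSOutputStream.waitAndQueueCurrentPacket: the queue of unacknowledged packets is bounded, and with the pipeline wedged (again, most plausibly by the descriptor exhaustion above) nothing ever drains it, so Object.wait() never returns. The shape of that blocking queue, as a generic wait/notify sketch rather than the DFSOutputStream internals:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Bounded producer/consumer queue of the kind implied by the trace:
    // put() blocks in wait() while the queue is full (the frame seen at
    // DFSOutputStream.java:1169). If the consumer thread is wedged, the
    // writer waits forever and the test's 60s timeout fires instead.
    public class BoundedPacketQueue<T> {
        private final Deque<T> queue = new ArrayDeque<T>();
        private final int capacity;

        public BoundedPacketQueue(int capacity) { this.capacity = capacity; }

        public synchronized void put(T packet) throws InterruptedException {
            while (queue.size() >= capacity) {
                wait();                 // blocks until take() makes room
            }
            queue.addLast(packet);
            notifyAll();                // wake a consumer blocked in take()
        }

        public synchronized T take() throws InterruptedException {
            while (queue.isEmpty()) {
                wait();
            }
            T packet = queue.removeFirst();
            notifyAll();                // wake a producer blocked in put()
            return packet;
        }
    }
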
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore

Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of a5ff67bedc4155cdb986ff24b0bc922a but expecting ed2520fe516bc29595e3e9e159e68de8

Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of a5ff67bedc4155cdb986ff24b0bc922a but expecting ed2520fe516bc29595e3e9e159e68de8
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
    at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4tka(TestStorageRestore.java:316)
    at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
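
The checkpoint left a truncated or corrupted fsimage in the secondary's current directory, so the MD5 recomputed at load time no longer matches the digest recorded for the file. The check itself is plain MD5-over-the-bytes; a self-contained sketch of that validation (a hypothetical helper, not the FSImage code path):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    // Recomputes a file's MD5 and compares it with the digest recorded
    // when the file was written, mirroring the failure message above.
    public class Md5Check {
        public static String md5Of(String path)
                throws IOException, NoSuchAlgorithmException {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            InputStream in = new FileInputStream(path);
            try {
                byte[] buf = new byte[8192];
                for (int n; (n = in.read(buf)) > 0; ) {
                    md5.update(buf, 0, n);
                }
            } finally {
                in.close();
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : md5.digest()) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }

        public static void checkImage(String path, String expected) throws Exception {
            String actual = md5Of(path);
            if (!actual.equals(expected)) {
                throw new IOException("Image file " + path
                    + " is corrupt with MD5 checksum of " + actual
                    + " but expecting " + expected);
            }
        }
    }
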