The directories are fine. These are expected exceptions (I believe) for tests that check exceptional cases.

On Jan 9, 2008, at 11:50 PM, Enis Soztutar (JIRA) wrote:


[ https://issues.apache.org/jira/browse/HADOOP-2569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12557563#action_12557563 ]

Enis Soztutar commented on HADOOP-2569:
---------------------------------------

Well, it seems to be stuck in the lines below, because the log statement after the call to initFiles() is never printed.
{code}
LOG.info("Creating " + numFiles + " file(s) in " + multiFileDir);
for (int i = 0; i < numFiles; i++) {
  Path path = new Path(multiFileDir, "file_" + i);
  FSDataOutputStream out = fs.create(path);
  if (numBytes == -1) {
    numBytes = rand.nextInt(MAX_BYTES);
  }
  for (int j = 0; j < numBytes; j++) {
    out.write(rand.nextInt());
  }
  out.close();
  if (LOG.isDebugEnabled()) {
    LOG.debug("Created file " + path + " with length " + numBytes);
  }
  lengths.put(path.getName(), new Long(numBytes));
}
{code}
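For what it's worth, the write loop above can be reproduced in isolation against the local filesystem. The sketch below is a hypothetical stand-in (plain java.io instead of FSDataOutputStream; the createFile helper is mine, not from the test). The point is that each out.write(rand.nextInt()) writes only the low-order byte of the int, so the resulting file length should equal numBytes; if the loop hangs or the length is short, the writes themselves are failing:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Random;

public class WriteLoopSketch {
  // Hypothetical stand-in for the body of initFiles(), using plain
  // java.io instead of FSDataOutputStream.
  static long createFile(File dir, String name, int numBytes) throws IOException {
    File f = new File(dir, name);
    FileOutputStream out = new FileOutputStream(f);
    Random rand = new Random();
    for (int j = 0; j < numBytes; j++) {
      // OutputStream.write(int) writes only the low-order 8 bits of the
      // argument, so each call appends exactly one byte to the file.
      out.write(rand.nextInt());
    }
    out.close();
    return f.length();
  }

  public static void main(String[] args) throws IOException {
    File tmp = new File(System.getProperty("java.io.tmpdir"));
    long len = createFile(tmp, "file_0", 64);
    System.out.println("wrote " + len + " bytes");
  }
}
```

Running this against a healthy local directory completes immediately, which is consistent with the hang being environmental rather than a bug in the loop itself.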

I assume there is some space-related problem on the build server. The test log reports many other problems, such as:
{noformat}
...
[junit] 2008-01-09 11:35:47,543 WARN dfs.DataNode (DataNode.java:copyBlock(1144)) - Got exception while serving blk_-4990242691406600933 to 127.0.0.1:35838: java.io.IOException: Block blk_-4990242691406600933 is not valid.
    [junit]     at org.apache.hadoop.dfs.FSDataset.getBlockFile(FSDataset.java:549)
    [junit]     at org.apache.hadoop.dfs.FSDataset.getMetaFile(FSDataset.java:466)
    [junit]     at org.apache.hadoop.dfs.FSDataset.getMetaDataInputStream(FSDataset.java:480)
    [junit]     at org.apache.hadoop.dfs.DataNode$BlockSender.<init>(DataNode.java:1298)
    [junit]     at org.apache.hadoop.dfs.DataNode$DataXceiver.copyBlock(DataNode.java:1114)
    [junit]     at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:877)
    [junit]     at java.lang.Thread.run(Thread.java:595)
...
[junit] 2008-01-09 11:35:56,574 ERROR dfs.DataNode (DataNode.java:run(1738)) - Exception: java.lang.reflect.UndeclaredThrowableException
    [junit]     at org.apache.hadoop.dfs.$Proxy1.blockReport(Unknown Source)
    [junit]     at org.apache.hadoop.dfs.DataNode.offerService(DataNode.java:616)
    [junit]     at org.apache.hadoop.dfs.DataNode.run(DataNode.java:1736)
    [junit]     at java.lang.Thread.run(Thread.java:595)
    [junit] Caused by: java.lang.InterruptedException
    [junit]     at java.lang.Object.wait(Native Method)
    [junit]     at org.apache.hadoop.ipc.Client.call(Client.java:504)
    [junit]     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
    [junit]     ... 4 more
...
[junit] 2008-01-09 11:37:31,788 WARN dfs.DataNode (DataNode.java:sendChunk(1401)) - Could not read checksum for data at offset 1536 for block blk_-5451683627314743712 got : java.io.EOFException
    [junit]     at java.io.DataInputStream.readFully(DataInputStream.java:178)
    [junit]     at org.apache.hadoop.dfs.DataNode$BlockSender.sendChunk(DataNode.java:1399)
    [junit]     at org.apache.hadoop.dfs.DataNode$BlockSender.sendBlock(DataNode.java:1449)
    [junit]     at org.apache.hadoop.dfs.DataNode$DataXceiver.readBlock(DataNode.java:920)
    [junit]     at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:865)
    [junit]     at java.lang.Thread.run(Thread.java:595)
...
[junit] 2008-01-09 12:36:51,570 WARN fs.AllocatorPerContext (LocalDirAllocator.java:createPath(256)) - org.apache.hadoop.util.DiskChecker$DiskErrorException: directory is not writable: build/test/temp/tmp4
    [junit]     at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:85)
    [junit]     at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createPath(LocalDirAllocator.java:253)
    [junit]     at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:301)
    [junit]     at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:326)
    [junit]     at org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:155)
    [junit]     at org.apache.hadoop.fs.TestLocalDirAllocator.createTempFile(TestLocalDirAllocator.java:77)
...
{noformat}
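The DiskErrorException above is what DiskChecker throws when a directory fails its write check. Independent of Hudson access, the same kind of check can be run standalone on the build machine. This is a simplified sketch of what "is the directory writable" amounts to (my own stand-in, not the actual DiskChecker code, which also verifies readability):

```java
import java.io.File;

public class DirCheckSketch {
  // Simplified DiskChecker-style check: the directory must exist
  // (or be creatable via mkdirs) and be writable by this process.
  static boolean isUsableDir(File dir) {
    if (!dir.isDirectory() && !dir.mkdirs()) {
      return false;  // path is not a directory and cannot be created
    }
    return dir.canWrite();
  }

  public static void main(String[] args) {
    File dir = new File(args.length > 0 ? args[0] : ".");
    System.out.println(dir + " usable: " + isUsableDir(dir));
  }
}
```

Pointing this at the failing path (build/test/temp/tmp4) on the build server would confirm whether the problem is permissions or a full disk.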

Can you check on the server whether there is a space problem, or whether the directories are writable? (I don't have a Hudson account.)


Unit test times out on Solaris nightly build: mapred.TestMultiFileInputFormat
-----------------------------------------------------------------------------

                Key: HADOOP-2569
                 URL: https://issues.apache.org/jira/browse/HADOOP-2569
            Project: Hadoop
         Issue Type: Bug
         Components: mapred
   Affects Versions: 0.16.0
        Environment: solaris
           Reporter: Mukund Madhugiri
           Assignee: Enis Soztutar
           Priority: Blocker
            Fix For: 0.16.0


Unit test failed in the nightly build: org.apache.hadoop.mapred.TestMultiFileInputFormat.unknown
junit.framework.AssertionFailedError: Timeout occurred
Logs are at:
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/361/

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

