[ https://issues.apache.org/jira/browse/HADOOP-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12898387#action_12898387 ]

Meng Mao commented on HADOOP-6092:
----------------------------------

I see. Yes, our physical disks are definitely full to bursting. Looking at one 
of the nodes that recently reported several no-space-left-on-device errors, our 
configured temp dir (/disk1/hadoop/hadoop-metadata/cache/mapred/local) has 62 
GB of data in it, yet only 16 GB of that is used by the current job.

There are about 1500 uncleaned attempt_ directories lying around in local/. The 
earliest have been there since the last restart of our grid. Is it safe to 
delete these? How do they accumulate?
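
For reference, here's the rough sketch I used to size up the stale directories. 
It's a quick standalone check, not anything from the Hadoop API; the local dir 
path is ours, and the one-day age cutoff is an arbitrary choice:

import java.io.File;

// Quick audit of leftover attempt_* directories under mapred.local.dir.
// Assumptions: the dirs sit directly under local/ (as they do on our nodes),
// and anything untouched for a day counts as "stale" -- adjust both to taste.
public class StaleAttemptDirs {
    // Recursively sums the sizes of all regular files under f.
    static long sizeOf(File f) {
        if (f.isFile()) return f.length();
        long total = 0;
        File[] children = f.listFiles();
        if (children != null) {
            for (File c : children) total += sizeOf(c);
        }
        return total;
    }

    public static void main(String[] args) {
        File local = new File("/disk1/hadoop/hadoop-metadata/cache/mapred/local");
        long cutoff = System.currentTimeMillis() - 24L * 60 * 60 * 1000; // one day ago
        int count = 0;
        long total = 0;
        File[] entries = local.listFiles();
        if (entries == null) return;
        for (File d : entries) {
            if (d.isDirectory() && d.getName().startsWith("attempt_")
                    && d.lastModified() < cutoff) {
                long bytes = sizeOf(d);
                System.out.printf("%s\t%d bytes%n", d.getName(), bytes);
                count++;
                total += bytes;
            }
        }
        System.out.printf("%d stale attempt dirs, %d bytes total%n", count, total);
    }
}

That at least tells us how much space we'd reclaim, before deciding whether 
removing them by hand is safe.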

> No space left on device
> -----------------------
>
>                 Key: HADOOP-6092
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6092
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: io
>    Affects Versions: 0.19.0
>         Environment: ubuntu0.8.4
>            Reporter: mawanqiang
>
> Exception in thread "main" org.apache.hadoop.fs.FSError: java.io.IOException: 
> No space left on device
>         at 
> org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:199)
>         at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>         at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
>         at java.io.FilterOutputStream.close(FilterOutputStream.java:140)
>         at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:61)
>         at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:86)
>         at 
> org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.close(ChecksumFileSystem.java:339)
>         at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:61)
>         at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:86)
>         at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:825)
>         at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1142)
>         at org.apache.nutch.indexer.Indexer.index(Indexer.java:72)
>         at org.apache.nutch.indexer.Indexer.run(Indexer.java:92)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>         at org.apache.nutch.indexer.Indexer.main(Indexer.java:101)
> Caused by: java.io.IOException: No space left on device
>         at java.io.FileOutputStream.writeBytes(Native Method)
>         at java.io.FileOutputStream.write(FileOutputStream.java:260)
>         at 
> org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:197)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
