[
https://issues.apache.org/jira/browse/HADOOP-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12488115
]
Raghu Angadi commented on HADOOP-1189:
--------------------------------------
Yes. We are actually using it in our cluster.
> Still seeing some unexpected 'No space left on device' exceptions
> -----------------------------------------------------------------
>
> Key: HADOOP-1189
> URL: https://issues.apache.org/jira/browse/HADOOP-1189
> Project: Hadoop
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.12.2
> Reporter: Raghu Angadi
> Assigned To: Raghu Angadi
> Fix For: 0.13.0
>
> Attachments: HADOOP-1189-2.patch, HADOOP-1189-3.patch
>
>
> One of the datanodes has one full partition (disk) out of four. The expected
> behaviour is that the datanode should skip this partition and use only the other
> three. HADOOP-990 fixed some bugs related to this. It seems to work ok, but
> some exceptions are still seeping through. In one case there were 33 of these out of
> 1200+ blocks written to this node. Not sure what caused this. I will submit a
> patch that prints a more useful message and throws the original exception.
> Two unlikely reasons I can think of are that the 2% reserved space (8GB in this case)
> is not enough, or that the client somehow still reports the block size as zero in some cases.
> A better error message should help here.
> If you see only a small number of these exceptions compared to the number of blocks
> written, for now you don't need to change anything.
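
For illustration, here is a minimal sketch of the behaviour described above: round-robin
volume selection that skips partitions without enough usable space and, when every
partition is rejected, throws an exception carrying the per-volume numbers. The names
(VolumePicker, pickVolume) are hypothetical and this is not the actual FSDataset code;
it only shows the kind of detailed message the patch aims for.

    // Hypothetical sketch, not the real datanode code: pick a volume for a new
    // block, skipping partitions that cannot hold it, and report the numbers
    // when all partitions are rejected instead of a bare ENOSPC.
    import java.io.File;
    import java.io.IOException;

    public class VolumePicker {
        private final File[] volumes;     // e.g. the four data directories
        private final long reservedBytes; // space kept free on each partition
        private int next = 0;             // round-robin cursor

        public VolumePicker(File[] volumes, long reservedBytes) {
            this.volumes = volumes;
            this.reservedBytes = reservedBytes;
        }

        /** Return the next volume that can hold blockSize bytes, skipping full ones. */
        public synchronized File pickVolume(long blockSize) throws IOException {
            StringBuilder report = new StringBuilder();
            for (int i = 0; i < volumes.length; i++) {
                File v = volumes[(next + i) % volumes.length];
                long available = v.getUsableSpace() - reservedBytes;
                if (available >= blockSize) {
                    next = (next + i + 1) % volumes.length;
                    return v;
                }
                report.append(v).append(": available=").append(available)
                      .append(" blockSize=").append(blockSize).append("; ");
            }
            // Include the per-volume figures so the failure is diagnosable.
            throw new IOException("No volume can hold " + blockSize
                    + " bytes (reserved=" + reservedBytes + " per volume). " + report);
        }
    }

Reporting the available space, the reserved space and the requested block size in the
message should make it clear whether the 2% reserve is too small or the client is
asking for a zero-sized block.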