It seems that this piece of code runs df to get the amount of free space
(got this info from the IRC channel), and then attempts a numeric
conversion on the values df returns:

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda2            1891213200 -45291780 1838887216   -  /
...
Of course in my case the Use% column is "-", and that is what trips the conversion :)
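For the record, the failure is easy to reproduce in isolation: parseInt/parseLong throws NumberFormatException on the "-" that df prints for fields it cannot compute. A minimal defensive-parsing sketch (the DfParseSketch class and parseField helper are hypothetical names for illustration, not Hadoop's actual DF code):

```java
// Sketch: defensively parse one data line of `df -k` output.
// df prints "-" for columns it cannot compute (e.g. on a sick filesystem);
// treat that as 0 instead of letting NumberFormatException propagate.
public class DfParseSketch {
    static long parseField(String s) {
        String t = s.trim();
        return "-".equals(t) ? 0L : Long.parseLong(t);
    }

    public static void main(String[] args) {
        // The problematic line from this datanode, columns separated by whitespace:
        String line = "/dev/sda2 1891213200 -45291780 1838887216 - /";
        String[] f = line.split("\\s+");
        long capacity  = parseField(f[1]);
        long used      = parseField(f[2]);  // negative, but still parseable
        long available = parseField(f[3]);
        long usePct    = parseField(f[4]);  // "-" would have thrown; here it becomes 0
        System.out.println(capacity + " " + used + " " + available + " " + usePct);
    }
}
```

Running it prints `1891213200 -45291780 1838887216 0` instead of dying on the "-".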

BTW, this datanode had stopped responding. It would be a good idea to run
this df check up front, so the failure does not surface in the middle of job
execution, and possibly even as part of ./hadoop dfsadmin -report.

I will close this thread once the disk issue (which it appears to be) is
resolved.

vishalsant wrote:
> 
> Hi guys, 
>    
>  I see the exception below when I launch a job
> 
> 
> 10/04/27 10:54:16 INFO mapred.JobClient:  map 0% reduce 0%
> 10/04/27 10:54:22 INFO mapred.JobClient: Task Id :
> attempt_201004271050_0001_m_005760_0, Status : FAILED
> Error initializing attempt_201004271050_0001_m_005760_0:
> java.lang.NumberFormatException: For input string: "-"
>       at
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
>       at java.lang.Integer.parseInt(Integer.java:476)
>       at java.lang.Integer.parseInt(Integer.java:499)
>       at org.apache.hadoop.fs.DF.parseExecResult(DF.java:125)
>       at org.apache.hadoop.util.Shell.runCommand(Shell.java:179)
>       at org.apache.hadoop.util.Shell.run(Shell.java:134)
>       at org.apache.hadoop.fs.DF.getAvailable(DF.java:73)
>       at
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:329)
>       at
> org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
>       at 
> org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:751)
>       at
> org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:1665)
>       at org.apache.hadoop.mapred.TaskTracker.access$1200(TaskTracker.java:97)
>       at
> org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:1630)
> 
>  
> 
> Few things
> 
> * I ran fsck on the namenode and no corrupted blocks reported.
> * The -report from dfsadmin says the datanode is up.
>  
> 

-- 
View this message in context: 
http://old.nabble.com/DataNode-not-able-to-spawn-a-Task-tp28378863p28379065.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
