What is the amount of data you are attempting to crunch in one MR job? Note
that map intermediate outputs are written to local disk before being sent to
the reducers, and this counts as non-DFS usage. So, roughly speaking, if your
input is 14 GB, you will certainly need more than 2-3 x 14 GB of free space
overall to get the whole process through.
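
If the intermediate output is what is eating the disk, compressing the map
output usually helps a lot. A minimal sketch, assuming the old
org.apache.hadoop.mapred API that your stack trace suggests (the class name
here is hypothetical):

    // Sketch only: compress the intermediate map output. These spill
    // files land under mapred.local.dir on each TaskTracker's local
    // disk, which is exactly what the NameNode reports as non-DFS usage.
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.mapred.JobConf;

    public class BulkLoadJobSetup {
        public static JobConf configure() {
            JobConf conf = new JobConf(BulkLoadJobSetup.class);
            conf.setCompressMapOutput(true);
            // GzipCodec ships with Hadoop; Snappy or LZO are faster if
            // their native libraries are installed on your nodes.
            conf.setMapOutputCompressorClass(GzipCodec.class);
            return conf;
        }
    }

Also check where mapred.local.dir points on your TaskTrackers: the
/tmp/hadoop-cfgsas1/mapred/local path in your error suggests it is on the
default /tmp volume, so pointing it at a bigger partition gives the
intermediate data room to spill.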


On Thu, Jan 17, 2013 at 7:20 PM, Vikas Jadhav <vikascjadha...@gmail.com> wrote:

> Here is my problem:
> I am bulk loading into HBase using a MapReduce program.
>
>  Configured Capacity : 15.5 GB
>  DFS Used            : 781.91 MB
>  Non DFS Used        : 1.68 GB
>  DFS Remaining       : 13.06 GB
>  DFS Used%           : 4.93 %
>  DFS Remaining%      : 84.26 %
>
> But when I run my program:
>
> Configured Capacity : 15.5 GB
> DFS Used            : 819.69 MB
> Non DFS Used        : 14.59 GB
> DFS Remaining       : 116.01 MB
> DFS Used%           : 5.16 %
> DFS Remaining%      : 0.73 %
>
> I have disabled the WAL in HBase, but it is still consuming non-DFS
> space, and my program fails; I have tried many times with no luck.
>
> What should I do so that non-DFS usage does not consume all the space?
>
> I am also unable to find the reason behind such heavy non-DFS space
> usage.
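>
> (For reference, a sketch of how the WAL got disabled -- the per-Put
> mechanism and the 0.90/0.94-era client API are assumptions here; note
> this only skips the RegionServer write-ahead log and does nothing about
> MapReduce intermediate output on local disk:)
>
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.hbase.HBaseConfiguration;
>     import org.apache.hadoop.hbase.client.HTable;
>     import org.apache.hadoop.hbase.client.Put;
>     import org.apache.hadoop.hbase.util.Bytes;
>
>     // Sketch: table, row key, family, and value names are placeholders.
>     Configuration conf = HBaseConfiguration.create();
>     HTable table = new HTable(conf, "mytable");
>     Put put = new Put(Bytes.toBytes("row1"));
>     put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
>     put.setWriteToWAL(false); // skip the RegionServer write-ahead log
>     table.put(put);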
>
>
> 13/01/17 08:44:07 INFO mapred.JobClient:  map 83% reduce 22%
> 13/01/17 08:44:09 INFO mapred.JobClient:  map 84% reduce 22%
> 13/01/17 08:44:12 INFO mapred.JobClient:  map 85% reduce 22%
> 13/01/17 08:44:15 INFO mapred.JobClient:  map 86% reduce 22%
> 13/01/17 08:44:18 INFO mapred.JobClient:  map 87% reduce 22%
> 13/01/17 08:44:22 INFO mapred.JobClient:  map 79% reduce 22%
> 13/01/17 08:44:25 INFO mapred.JobClient:  map 80% reduce 25%
> 13/01/17 08:44:27 INFO mapred.JobClient: Task Id : attempt_201301170837_0004_m_000009_0, Status : FAILED
> FSError: java.io.IOException: No space left on device
> java.lang.Throwable: Child Error
>         at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
> Caused by: java.io.IOException: Creation of /tmp/hadoop-cfgsas1/mapred/local/userlogs/job_201301170837_0004/attempt_201301170837_0004_m_000009_0.cleanup failed.
>         at org.apache.hadoop.mapred.TaskLog.createTaskAttemptLogDir(TaskLog.java:104)
>         at org.apache.hadoop.mapred.DefaultTaskController.createLogDir(DefaultTaskController.java:71)
>         at org.apache.hadoop.mapred.TaskRunner.prepareLogFiles(TaskRunner.java:316)
>         at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228)
> 13/01/17 08:44:27 WARN mapred.JobClient: Error reading task output http://rdcesx12078.race.sas.com:50060/tasklog?plaintext=true&attemptid=attempt_201301170837_0004_m_000009_0&filter=stdout
> 13/01/17 08:44:27 WARN mapred.JobClient: Error reading task output http://rdcesx12078.race.sas.com:50060/tasklog?plaintext=true&attemptid=attempt_201301170837_0004_m_000009_0&filter=stderr
> 13/01/17 08:44:28 INFO mapred.JobClient:  map 82% reduce 25%
> 13/01/17 08:44:31 INFO mapred.JobClient:  map 83% reduce 25%
> 13/01/17 08:45:07 INFO mapred.JobClient:  map 83% reduce 27%
>
>
>
>
> On Wed, Jan 16, 2013 at 6:43 PM, Jean-Marc Spaggiari <jean-m...@spaggiari.org> wrote:
>
>> I think you can still run with the OS on another drive, or on a live
>> USB drive, or even entirely in memory, loaded over the network while
>> the server boots from a network drive, etc. No?
>>
>> JM
>>
>> 2013/1/16, Mohammad Tariq <donta...@gmail.com>:
>> > That would be really cool Chris.
>> > +1 for that.
>> >
>> > Warm Regards,
>> > Tariq
>> > https://mtariq.jux.com/
>> > cloudfront.blogspot.com
>> >
>> >
>> > On Wed, Jan 16, 2013 at 6:15 PM, Chris Embree <cemb...@gmail.com> wrote:
>> >
>> >> Ha, you joke, but we're planning on running with no local OS. If it
>> >> works as planned I'll post a nice summary of our approach. :)
>> >>
>> >>
>> >> On Wed, Jan 16, 2013 at 2:53 AM, Harsh J <ha...@cloudera.com> wrote:
>> >>
>> >>> <kidding> Wipe your OS out. </kidding>
>> >>>
>> >>> Please read: http://search-hadoop.com/m/9Qwi9UgMOe
>> >>>
>> >>>
>> >>> On Wed, Jan 16, 2013 at 1:16 PM, Vikas Jadhav <vikascjadha...@gmail.com> wrote:
>> >>>
>> >>>>
>> >>>> How can I remove non-DFS space usage from a Hadoop cluster?
>> >>>>
>> >>>> --
>> >>>> Thanx and Regards
>> >>>> Vikas Jadhav
>> >>>>
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Harsh J
>> >>>
>> >>
>> >>
>> >
>>
>
>
>
> --
> Thanx and Regards
> Vikas Jadhav
>



-- 
Harsh J
