java.io.IOException: All datanodes are bad. Aborting...

2009-05-06 Thread Mayuran Yogarajah
I have 2 directories listed for dfs.data.dir and one of them got to 100%
used during a job I ran. I suspect that's the reason I see this error in
the logs.


Can someone please confirm this?

thanks
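
A quick way to check for this condition is to look at the usage of each volume backing dfs.data.dir. The sketch below is a hypothetical helper, not part of Hadoop; the paths passed in are placeholders for whatever your dfs.data.dir actually lists.

```shell
# Report the usage percentage for each directory passed in.
# /tmp and /var/tmp below are placeholders -- substitute the
# directories from your own dfs.data.dir setting.
dir_usage() {
  for d in "$@"; do
    # df -P gives POSIX output; field 5 of line 2 is the capacity percent
    pct=$(df -P "$d" | awk 'NR==2 {print $5}')
    echo "$d usage: $pct"
  done
}

dir_usage /tmp /var/tmp
```

If any listed directory shows 100%, the datanode on that machine can no longer accept block writes, which matches the symptom above.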


Re: java.io.IOException: All datanodes are bad. Aborting...

2008-07-11 Thread Shengkai Zhu
Did you do the clean-up on all the datanodes?
rm -Rf /path/to/my/hadoop/dfs/data
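To apply that clean-up on every datanode, something like the following loop over ssh could be used. This is a sketch only: the hostnames are assumptions, and the destructive ssh line is left commented out so it only prints what it would do.

```shell
# Sketch of running the datanode clean-up across a cluster over ssh.
# node1/node2/node3 are placeholder hostnames -- list your actual datanodes.
clean_datanodes() {
  for h in "$@"; do
    echo "would run on $h: rm -Rf /path/to/my/hadoop/dfs/data"
    # ssh "$h" 'rm -Rf /path/to/my/hadoop/dfs/data'  # uncomment to really run
  done
}

clean_datanodes node1 node2 node3
```

Note this deletes all block data on each node, so it only makes sense together with reformatting the namenode.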


On 6/20/08, novice user <[EMAIL PROTECTED]> wrote:
>
>
> Hi Mori Bellamy,
> I did this twice, and the same problem still persists. I don't know
> how to solve this issue. If anyone knows the answer, please let me know.
>
> Thanks
>
> Mori Bellamy wrote:
> >
> > That's bizarre. I'm not sure why your DFS would have magically gotten
> > full. Whenever Hadoop gives me trouble, I try the following sequence
> > of commands:
> >
> > stop-all.sh
> > rm -Rf /path/to/my/hadoop/dfs/data
> > hadoop namenode -format
> > start-all.sh
> >
> > Maybe you would have some luck if you ran that on all of the machines?
> > (Of course, don't run it if you don't want to lose all of that "data".)
> > On Jun 19, 2008, at 4:32 AM, novice user wrote:
> >
> >>
> >> Hi everyone,
> >> I am running a simple map-reduce application similar to k-means.
> >> When I ran it on a single machine, it went fine without any issues,
> >> but when I ran the same job on a Hadoop cluster of 9 machines, it
> >> fails with:
> >> java.io.IOException: All datanodes are bad. Aborting...
> >>
> >> Here is more explanation of the problem:
> >> I tried to upgrade my Hadoop cluster to hadoop-17. During this
> >> process, I made the mistake of not installing Hadoop on all machines,
> >> so the upgrade failed, nor was I able to roll back. So I re-formatted
> >> the name node afresh, and then the Hadoop installation was successful.
> >>
> >> Later, when I ran my map-reduce job, it ran successfully, but the
> >> same job with zero reduce tasks fails with the error:
> >> java.io.IOException: All datanodes are bad. Aborting...
> >>
> >> When I looked into the data nodes, I found that the file system is
> >> 100% full with directories named "subdir" in the
> >> hadoop-username/dfs/data/current directory. I am wondering where I
> >> went wrong.
> >> Can someone please help me with this?
> >>
> >> The same job went fine on a single machine with the same amount of
> >> input data.
> >>
> >> Thanks
> >>
> >>
> >>
> >> --
> >> View this message in context:
> >>
> http://www.nabble.com/java.io.IOException%3A-All-datanodes-are-bad.-Aborting...-tp18006296p18006296.html
> >> Sent from the Hadoop core-user mailing list archive at Nabble.com.
> >>
> >
> >
> >
>
> --
> View this message in context:
> http://www.nabble.com/java.io.IOException%3A-All-datanodes-are-bad.-Aborting...-tp18006296p18022330.html
> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>
>


-- 

朱盛凯

Jash Zhu

复旦大学软件学院

Software School, Fudan University


Re: java.io.IOException: All datanodes are bad. Aborting...

2008-06-19 Thread novice user

Hi Mori Bellamy,
 I did this twice, and the same problem still persists. I don't know
how to solve this issue. If anyone knows the answer, please let me know.

Thanks

Mori Bellamy wrote:
> 
> That's bizarre. I'm not sure why your DFS would have magically gotten
> full. Whenever Hadoop gives me trouble, I try the following sequence
> of commands:
>
> stop-all.sh
> rm -Rf /path/to/my/hadoop/dfs/data
> hadoop namenode -format
> start-all.sh
>
> Maybe you would have some luck if you ran that on all of the machines?
> (Of course, don't run it if you don't want to lose all of that "data".)
> On Jun 19, 2008, at 4:32 AM, novice user wrote:
> 
>>
>> Hi everyone,
>> I am running a simple map-reduce application similar to k-means.
>> When I ran it on a single machine, it went fine without any issues,
>> but when I ran the same job on a Hadoop cluster of 9 machines, it
>> fails with:
>> java.io.IOException: All datanodes are bad. Aborting...
>>
>> Here is more explanation of the problem:
>> I tried to upgrade my Hadoop cluster to hadoop-17. During this
>> process, I made the mistake of not installing Hadoop on all machines,
>> so the upgrade failed, nor was I able to roll back. So I re-formatted
>> the name node afresh, and then the Hadoop installation was successful.
>>
>> Later, when I ran my map-reduce job, it ran successfully, but the
>> same job with zero reduce tasks fails with the error:
>> java.io.IOException: All datanodes are bad. Aborting...
>>
>> When I looked into the data nodes, I found that the file system is
>> 100% full with directories named "subdir" in the
>> hadoop-username/dfs/data/current directory. I am wondering where I
>> went wrong.
>> Can someone please help me with this?
>>
>> The same job went fine on a single machine with the same amount of
>> input data.
>>
>> Thanks
>>
>>
>>
>>
> 
> 
> 




Re: java.io.IOException: All datanodes are bad. Aborting...

2008-06-19 Thread Mori Bellamy
That's bizarre. I'm not sure why your DFS would have magically gotten
full. Whenever Hadoop gives me trouble, I try the following sequence
of commands:


stop-all.sh
rm -Rf /path/to/my/hadoop/dfs/data
hadoop namenode -format
start-all.sh

Maybe you would have some luck if you ran that on all of the machines?
(Of course, don't run it if you don't want to lose all of that "data".)
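
Given how destructive that sequence is, a hypothetical guarded wrapper (not from the thread) could make an accidental wipe harder; it refuses to proceed unless an explicit CONFIRM=yes is set:

```shell
# Guarded sketch of the reset sequence above. The function refuses to
# run unless CONFIRM=yes, because the sequence destroys all HDFS data.
reset_hdfs() {
  if [ "${CONFIRM:-}" != "yes" ]; then
    echo "refusing to wipe HDFS; set CONFIRM=yes to proceed"
    return 1
  fi
  stop-all.sh
  rm -Rf /path/to/my/hadoop/dfs/data
  hadoop namenode -format
  start-all.sh
}

reset_hdfs || true
```

Run it as `CONFIRM=yes reset_hdfs` only once you are sure losing the DFS contents is acceptable.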

On Jun 19, 2008, at 4:32 AM, novice user wrote:



Hi everyone,
I am running a simple map-reduce application similar to k-means. When I
ran it on a single machine, it went fine without any issues, but when I
ran the same job on a Hadoop cluster of 9 machines, it fails with:
java.io.IOException: All datanodes are bad. Aborting...

Here is more explanation of the problem:
I tried to upgrade my Hadoop cluster to hadoop-17. During this process, I
made the mistake of not installing Hadoop on all machines, so the upgrade
failed, nor was I able to roll back. So I re-formatted the name node
afresh, and then the Hadoop installation was successful.

Later, when I ran my map-reduce job, it ran successfully, but the same
job with zero reduce tasks fails with the error:
java.io.IOException: All datanodes are bad. Aborting...

When I looked into the data nodes, I found that the file system is 100%
full with directories named "subdir" in the
hadoop-username/dfs/data/current directory. I am wondering where I went
wrong.
Can someone please help me with this?

The same job went fine on a single machine with the same amount of input
data.

Thanks








java.io.IOException: All datanodes are bad. Aborting...

2008-06-19 Thread novice user

Hi everyone,
I am running a simple map-reduce application similar to k-means. When I
ran it on a single machine, it went fine without any issues, but when I
ran the same job on a Hadoop cluster of 9 machines, it fails with:
java.io.IOException: All datanodes are bad. Aborting...

Here is more explanation of the problem:
I tried to upgrade my Hadoop cluster to hadoop-17. During this process, I
made the mistake of not installing Hadoop on all machines, so the upgrade
failed, nor was I able to roll back. So I re-formatted the name node
afresh, and then the Hadoop installation was successful.

Later, when I ran my map-reduce job, it ran successfully, but the same
job with zero reduce tasks fails with the error:
java.io.IOException: All datanodes are bad. Aborting...

When I looked into the data nodes, I found that the file system is 100%
full with directories named "subdir" in the
hadoop-username/dfs/data/current directory. I am wondering where I went
wrong.
Can someone please help me with this?

The same job went fine on a single machine with the same amount of input
data.

Thanks
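
To see how much of the disk those "subdir" directories are actually consuming, a small sketch like the one below can help. The default path is an assumption; the real location comes from your dfs.data.dir setting.

```shell
# Sketch: report how much space the datanode's block storage occupies.
# The default of $HOME/dfs/data is an assumption -- pass the directory
# from your own dfs.data.dir instead.
block_usage() {
  dir="${1:-$HOME/dfs/data}"
  if [ -d "$dir/current" ]; then
    du -sh "$dir/current"               # total size of stored blocks
    find "$dir/current" -type d | wc -l # how many nested "subdir" dirs exist
  else
    echo "no block storage at $dir/current"
  fi
}

block_usage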


