Check the integrity of the file system, and check the replication factor; on
a single node it may have been left at the default of 3 by mistake. If you
have HBase configured, run hbck to verify that everything is fine with the
cluster.
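A minimal sketch of the checks above for a Hadoop 1.x pseudo-distributed setup (the mount point `/` and single-node assumptions are mine; adjust for your cluster):

```shell
# Check HDFS integrity; the summary also reports the default replication
# factor and any under-replicated or corrupt blocks
hadoop fsck /

# On a single node, dfs.replication should be 1 (set in hdfs-site.xml);
# with the default of 3, blocks can never be fully replicated, which can
# keep the NameNode waiting in safe mode
hadoop dfsadmin -report

# Inspect safe mode status and, if needed, leave it manually
hadoop dfsadmin -safemode get
hadoop dfsadmin -safemode leave

# If HBase is configured, check cluster consistency
hbase hbck
```

These commands need a running cluster, so run them on the node hosting the NameNode (and HMaster, for hbck).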



∞
Shashwat Shriparv



On Sun, Jan 20, 2013 at 3:09 PM, xin jiang <jiangxin1...@gmail.com> wrote:

>
>
> On Sun, Jan 20, 2013 at 7:50 AM, Mohammad Tariq <donta...@gmail.com>wrote:
>
>> Hey Jean,
>>
>>         Feels good to hear that ;) I don't have to feel
>> like a solitary yonker anymore.
>>
>> Since I am working on a single node, the problem
>> becomes more severe. I don't have any other node
>> where the MR files could get replicated.
>>
>> Warm Regards,
>> Tariq
>>  https://mtariq.jux.com/
>> cloudfront.blogspot.com
>>
>>
>> On Sun, Jan 20, 2013 at 5:08 AM, Jean-Marc Spaggiari <
>> jean-m...@spaggiari.org> wrote:
>>
>>> Hi Tariq,
>>>
>>> I often have to force HDFS out of safe mode manually when I
>>> restart my cluster (or after a power outage)... I never thought about
>>> reporting that ;)
>>>
>>> I'm using hadoop-1.0.3. I think it was because the MR files were still
>>> not replicated on enough nodes, but I'm not 100% sure.
>>>
>>> JM
>>>
>>> 2013/1/19, Mohammad Tariq <donta...@gmail.com>:
>>> > Hello list,
>>> >
>>> >        I have a pseudo-distributed setup on my laptop. Everything was
>>> > working fine until now. But lately HDFS has started taking a lot of
>>> > time to leave safe mode. In fact, I have to do it manually most of the
>>> > time, as the TT and HBase daemons get disturbed because of this.
>>> >
>>> > I am using hadoop-1.0.4. Is it a problem with this version? I have
>>> > never faced any such issue with older versions. Or, is something
>>> > going wrong on my side?
>>> >
>>> > Thank you so much for your precious time.
>>> >
>>> > Warm Regards,
>>> > Tariq
>>> > https://mtariq.jux.com/
>>> > cloudfront.blogspot.com
>>> >
>>>
>>
>>
>
