Re: stuck in safe mode after restarting dfs after finding a dead node

2012-07-14 Thread Edward Capriolo
If the files are gone forever, you should run:

hadoop fsck -delete /

To acknowledge that they have moved on from existence. Otherwise, anything
that attempts to read these files will, to put it in a technical way,
BARF.
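
To see which files are actually affected before deleting anything, one
option (a sketch using the standard fsck flags) is:

hadoop fsck / -files -blocks -locations

The summary at the end reports the corrupt and missing blocks; -delete
only makes sense once you are sure those blocks cannot come back from
the dead datanode.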

On Fri, Jul 13, 2012 at 12:22 PM, Juan Pino  wrote:
> Thank you for your reply. I ran that command before and it works fine, but
> hadoop fs -ls displays the list of files in the user's directory and then
> hangs for quite a while (~10 minutes) before
> handing the command line prompt back; if I rerun the same command
> there is no problem. That is why I would like to be able to leave safe mode
> automatically (at least I think it's related).
> Also, on the HDFS web page, clicking the Live Nodes or Dead Nodes links
> hangs forever, but I am able to browse the file
> system in the browser without any problem.
> There is no error in the logs.
> Please let me know what sort of details I can provide to help resolve this
> issue.
>
> Best,
>
> Juan
>
> On Fri, Jul 13, 2012 at 4:10 PM, Edward Capriolo wrote:
>
>> If the datanode is not coming back you have to explicitly tell hadoop
>> to leave safemode.
>>
>> http://hadoop.apache.org/common/docs/r0.17.2/hdfs_user_guide.html#Safemode
>>
>> hadoop dfsadmin -safemode leave
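>>
>> A minimal way to confirm the state change (just a sketch using the
>> standard dfsadmin flags):
>>
>> hadoop dfsadmin -safemode get
>> hadoop dfsadmin -safemode leave
>> hadoop dfsadmin -safemode get
>>
>> -safemode get should report "Safe mode is OFF" once the namenode has
>> actually left safe mode.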
>>
>>
>> On Fri, Jul 13, 2012 at 9:35 AM, Juan Pino  wrote:
>> > Hi,
>> >
>> > I can't get HDFS to leave safe mode automatically. Here is what I did:
>> >
>> > -- there was a dead node
>> > -- I stopped dfs
>> > -- I restarted dfs
>> > -- safe mode wouldn't turn off automatically
>> >
>> > I am using hadoop-1.0.2
>> >
>> > Here are the logs:
>> >
>> > end of hadoop-hadoop-namenode.log (attached):
>> >
>> > 2012-07-13 13:22:29,372 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON.
>> > The ratio of reported blocks 0.9795 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
>> > 2012-07-13 13:22:29,375 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode extension entered.
>> > The ratio of reported blocks 0.9990 has reached the threshold 0.9990. Safe mode will be turned off automatically in 29 seconds.
>> > 2012-07-13 13:22:29,375 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* NameSystem.processReport: from , blocks: 3128, processing time: 4 msecs
>> > 2012-07-13 13:31:29,201 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.processReport: discarded non-initial block report from because namenode still in startup phase
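>> >
>> > For reference, the 0.9990 threshold and the ~30 second extension in
>> > these messages come from the namenode safe mode settings. A sketch of
>> > the relevant hdfs-site.xml entries for Hadoop 1.x (the values shown
>> > are the defaults) would be:
>> >
>> >   <property>
>> >     <name>dfs.safemode.threshold.pct</name>
>> >     <value>0.999f</value>
>> >   </property>
>> >   <property>
>> >     <name>dfs.safemode.extension</name>
>> >     <value>30000</value>
>> >   </property>
>> >
>> > The namenode only leaves safe mode on its own once the reported-block
>> > ratio stays at or above the threshold for the whole extension period.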
>> >
>> > Any help would be greatly appreciated.
>> >
>> > Best,
>> >
>> > Juan
>> >
>>


detect if splittable compression is working?

2012-07-14 Thread Denny Lee
Out of curiosity, what's the best way to detect whether splitting of compressed
files is working?  The reason I ask is that we're testing out different compression
settings of LZO, gzip, RAW, BZ2, and Snappy and I'm getting some very hinky
results.  To help debug, I just wanted to find out if there is an easy way to tell
whether the splitting of compressed files is working.
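
One rough check from code, assuming a Hadoop release that includes the
SplittableCompressionCodec interface (older releases do not ship it), is
to look up the codec for an input file and see whether it implements that
interface. SplitCheck below is just a hypothetical helper name:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.compress.CompressionCodec;
  import org.apache.hadoop.io.compress.CompressionCodecFactory;
  import org.apache.hadoop.io.compress.SplittableCompressionCodec;

  public class SplitCheck {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // The codec is chosen from the file extension (.gz, .bz2, .lzo, ...).
      CompressionCodecFactory factory = new CompressionCodecFactory(conf);
      Path input = new Path(args[0]);
      CompressionCodec codec = factory.getCodec(input);
      if (codec == null) {
        System.out.println(input + ": no codec found (uncompressed input is always splittable)");
      } else if (codec instanceof SplittableCompressionCodec) {
        System.out.println(input + ": " + codec.getClass().getSimpleName() + " is splittable");
      } else {
        System.out.println(input + ": " + codec.getClass().getSimpleName() + " is NOT splittable");
      }
    }
  }

A practical cross-check is the map task count: a non-splittable codec
gives exactly one map per input file no matter how large the file is,
while a splittable one gives roughly one map per block. Indexed LZO is a
special case, since its splitting is handled by the hadoop-lzo input
formats rather than through this interface.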

Thanks!
Denny


Re: detect if splittable compression is working?

2012-07-14 Thread Tim Broberg
What version of Hadoop are you using?

- Tim.

On Jul 14, 2012, at 1:20 PM, "Denny Lee"  wrote:

> Out of curiosity, what's the best way to detect whether splitting of compressed
> files is working?  The reason I ask is that we're testing out different compression
> settings of LZO, gzip, RAW, BZ2, and Snappy and I'm getting some very hinky
> results.  To help debug, I just wanted to find out if there is an easy way to tell
> whether the splitting of compressed files is working.
>
> Thanks!
> Denny
