Hi, 

    I guess this is about the "threshold 0.9990". When HDFS starts up, the
NameNode enters safe mode first, then waits for a value (the fraction of
blocks that DataNodes have reported back) to reach the threshold. In my case
that fraction is below 99.9%, so safe mode will not turn off?

But the conclusion in the log is "Safe mode will be turned off 
automatically"?

I'm lost.
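If I read the message right, the check seems to be simple arithmetic — a minimal sketch of what the log implies, not the actual Hadoop code:

```python
def safe_mode_should_stay_on(reported_blocks, total_blocks, threshold=0.9990):
    """Sketch of the safe-mode check implied by the log message:
    stay in safe mode until the fraction of blocks reported by
    DataNodes reaches the configured threshold."""
    if total_blocks == 0:
        return False  # nothing to wait for
    return reported_blocks / total_blocks < threshold

# Numbers from the log below: 0 of 3 blocks reported, threshold 0.9990.
print(safe_mode_should_stay_on(0, 3))  # → True: safe mode stays ON
print(safe_mode_should_stay_on(3, 3))  # → False: enough blocks reported
```

So the NameNode is just waiting for the DataNodes to report their blocks; once 3 of 3 are in, it leaves safe mode on its own — which would explain the "turned off automatically" wording.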
___________________________________________________
 2011-04-08 11:58:21,036 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe 
mode ON.
>>> The reported blocks 0 needs additional 2 blocks to reach the threshold 
>>> 0.9990 of total blocks 3. Safe mode will be turned off automatically.
________________________________________________________________________
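For what it's worth, the 0.9990 figure appears to come from the dfs.safemode.threshold.pct property in hdfs-site.xml (0.999f by default, if I remember right). Something like the fragment below should change it — setting it to 0 makes the NameNode skip the wait entirely, though that is usually unsafe because it may leave safe mode before any blocks are reported:

```xml
<!-- hdfs-site.xml: fraction of blocks that must be reported
     before the NameNode leaves safe mode (default 0.999f). -->
<property>
  <name>dfs.safemode.threshold.pct</name>
  <value>0.999f</value>
</property>
```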

----- Original Message ----- 
From: "springring" <springr...@126.com>
To: <common-user@hadoop.apache.org>
Sent: Friday, April 08, 2011 2:20 PM
Subject: Fw: start-up with safe mode?


> 
> 
>> 
>>> Hi,
>>> 
>>>   When I start up Hadoop, the NameNode log shows "STATE* Safe mode ON" like 
>>> that; how do I turn it off?
>>     I can turn it off with the command "hadoop dfsadmin -safemode leave" 
>> after start-up, but how can I just start HDFS
>>     out of safe mode? 
>>>   Thanks.
>>> 
>>> Ring
>>> 
>>> the startup log
>>> ________________________________________________________________
>>> 
>>> 2011-04-08 11:58:20,655 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: 
>>> Initializing JVM Metrics with processName=NameNode, sessionId=null
>>> 2011-04-08 11:58:20,657 INFO 
>>> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: 
>>> Initializing NameNodeMeterics using context 
>>> object:org.apache.hadoop.metrics.spi.NullContext
>>> 2011-04-08 11:58:20,678 INFO org.apache.hadoop.hdfs.util.GSet: VM type      
>>>  = 32-bit
>>> 2011-04-08 11:58:20,678 INFO org.apache.hadoop.hdfs.util.GSet: 2% max 
>>> memory = 17.77875 MB
>>> 2011-04-08 11:58:20,678 INFO org.apache.hadoop.hdfs.util.GSet: capacity     
>>>  = 2^22 = 4194304 entries
>>> 2011-04-08 11:58:20,678 INFO org.apache.hadoop.hdfs.util.GSet: 
>>> recommended=4194304, actual=4194304
>>> 2011-04-08 11:58:20,697 INFO 
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hdfs
>>> 2011-04-08 11:58:20,697 INFO 
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>> 2011-04-08 11:58:20,697 INFO 
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
>>> isPermissionEnabled=true
>>> 2011-04-08 11:58:20,701 INFO 
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
>>> dfs.block.invalidate.limit=1000
>>> 2011-04-08 11:58:20,701 INFO 
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), 
>>> accessTokenLifetime=0 min(s)
>>> 2011-04-08 11:58:20,976 INFO 
>>> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: 
>>> Initializing FSNamesystemMetrics using context 
>>> object:org.apache.hadoop.metrics.spi.NullContext
>>> 2011-04-08 11:58:21,001 INFO org.apache.hadoop.hdfs.server.common.Storage: 
>>> Number of files = 17
>>> 2011-04-08 11:58:21,007 INFO org.apache.hadoop.hdfs.server.common.Storage: 
>>> Number of files under construction = 0
>>> 2011-04-08 11:58:21,007 INFO org.apache.hadoop.hdfs.server.common.Storage: 
>>> Image file of size 1529 loaded in 0 seconds.
>>> 2011-04-08 11:58:21,007 INFO org.apache.hadoop.hdfs.server.common.Storage: 
>>> Edits file /tmp/hadoop-hdfs/dfs/name/current/edits of size 4 edits # 0 
>>> loaded in 0 seconds.
>>> 2011-04-08 11:58:21,009 INFO org.apache.hadoop.hdfs.server.common.Storage: 
>>> Image file of size 1529 saved in 0 seconds.
>>> 2011-04-08 11:58:21,022 INFO org.apache.hadoop.hdfs.server.common.Storage: 
>>> Image file of size 1529 saved in 0 seconds.
>>> 2011-04-08 11:58:21,032 INFO 
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading 
>>> FSImage in 339 msecs
>>> 2011-04-08 11:58:21,036 INFO org.apache.hadoop.hdfs.StateChange: STATE* 
>>> Safe mode ON.
>>> The reported blocks 0 needs additional 2 blocks to reach the threshold 
>>> 0.9990 of total blocks 3. Safe mode will be turned off automatically.
>>>
