Re: start-all.sh does not work in hadoop-0.20.0

2009-06-02 Thread Nick Cen
Today I gave start-all.sh another try; although the same exception was
thrown, it worked this time. So I think maybe start-all.sh is not that
stable.
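
For what it's worth, the intermittent behaviour looks like a race against safe mode rather than instability in the script itself: start-all.sh runs start-dfs.sh and start-mapred.sh back to back, so whether the JobTracker's first cleanup attempt succeeds depends on how quickly the NameNode leaves safe mode. A minimal sketch for watching that state (assumes the hadoop command is on PATH; guarded so it is a no-op elsewhere):

```shell
#!/bin/sh
# Poll the NameNode until "hadoop dfsadmin -safemode get" reports OFF.
wait_safemode_off() {
    until hadoop dfsadmin -safemode get | grep -q 'OFF'; do
        echo "NameNode still in safe mode; retrying in 2s..."
        sleep 2
    done
}

# Guard: only attempt the poll when a Hadoop install is actually present.
if command -v hadoop >/dev/null 2>&1; then
    wait_safemode_off
else
    echo "hadoop not on PATH; skipping"
fi
```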

2009/6/2 Aaron Kimball aa...@cloudera.com

 This exception is logged with a severity level of INFO. I think this is a
 relatively common exception. The JobTracker should just wait until the DFS
 exits safemode, after which clearing the system directory will proceed as
 usual.
 So I don't think that this is killing your JobTracker.

 Can you confirm that the JobTracker process, in fact, does die? If so,
 there's probably an exception lower down in the log marked with a severity
 level of ERROR or FATAL -- do you see any of those?

 - Aaron


 On Mon, Jun 1, 2009 at 7:22 AM, Nick Cen cenyo...@gmail.com wrote:

  Hi All,
 
  I can start Hadoop by running start-dfs.sh and start-mapred.sh, but when
  I use start-all.sh only HDFS starts. The JobTracker's log indicates an
  exception when starting the JobTracker.
 
  2009-06-01 22:06:59,675 INFO org.apache.hadoop.mapred.JobTracker: problem
  cleaning system directory:
  hdfs://localhost:54310/tmp/hadoop-nick/mapred/system
  org.apache.hadoop.ipc.RemoteException:
  org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete
  /tmp/hadoop-nick/mapred/system. Name node is in safe mode.
  The ratio of reported blocks 0. has not reached the threshold 0.9990.
  Safe mode will be turned off automatically.
  [stack trace snipped]
 
 
  Is this exception caused by there being no delay between the start-dfs.sh
  and start-mapred.sh scripts? Thanks.
 
 
  --
  http://daily.appspot.com/food/
 




-- 
http://daily.appspot.com/food/


start-all.sh does not work in hadoop-0.20.0

2009-06-01 Thread Nick Cen
Hi All,

I can start Hadoop by running start-dfs.sh and start-mapred.sh, but when
I use start-all.sh only HDFS starts. The JobTracker's log indicates an
exception when starting the JobTracker.

2009-06-01 22:06:59,675 INFO org.apache.hadoop.mapred.JobTracker: problem
cleaning system directory:
hdfs://localhost:54310/tmp/hadoop-nick/mapred/system
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete
/tmp/hadoop-nick/mapred/system. Name node is in safe mode.
The ratio of reported blocks 0. has not reached the threshold 0.9990.
Safe mode will be turned off automatically.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1681)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1661)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:517)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

    at org.apache.hadoop.ipc.Client.call(Client.java:739)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at $Proxy4.delete(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy4.delete(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:550)
    at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:227)
    at org.apache.hadoop.mapred.JobTracker.init(JobTracker.java:1637)
    at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:174)
    at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:3528)


Is this exception caused by there being no delay between the start-dfs.sh
and start-mapred.sh scripts? Thanks.
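
If the race is the issue, the orderly workaround is to start the daemons yourself with an explicit wait in between. A hedged sketch (assumes Hadoop's bin/ scripts are on PATH; `hadoop dfsadmin -safemode wait` blocks until the NameNode leaves safe mode; guarded so it is a no-op on machines without Hadoop):

```shell
#!/bin/sh
# Start the daemons in order, waiting out safe mode between the two steps,
# instead of relying on start-all.sh's back-to-back timing.
start_in_order() {
    start-dfs.sh                     # bring up the NameNode and DataNodes
    hadoop dfsadmin -safemode wait   # returns once safe mode turns off
    start-mapred.sh                  # JobTracker can now clean its system dir
}

# Guard: only run when the Hadoop scripts are actually installed.
if command -v start-dfs.sh >/dev/null 2>&1; then
    start_in_order
else
    echo "Hadoop scripts not on PATH; skipping"
fi
```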


-- 
http://daily.appspot.com/food/


Re: start-all.sh does not work in hadoop-0.20.0

2009-06-01 Thread Aaron Kimball
This exception is logged with a severity level of INFO. I think this is a
relatively common exception. The JobTracker should just wait until the DFS
exits safemode, after which clearing the system directory will proceed as
usual. So I don't think that this is killing your JobTracker.

Can you confirm that the JobTracker process, in fact, does die? If so,
there's probably an exception lower down in the log marked with a severity
level of ERROR or FATAL -- do you see any of those?
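
One way to run that check is a severity grep over the JobTracker log. A sketch against a made-up log fragment (both the path and the FATAL BindException line below are illustrative, not from this cluster; real logs live under the Hadoop logs directory):

```shell
#!/bin/sh
# Illustrative stand-in for a JobTracker log; the FATAL line is invented.
cat > /tmp/jobtracker-sample.log <<'EOF'
2009-06-01 22:06:59,675 INFO org.apache.hadoop.mapred.JobTracker: problem cleaning system directory
2009-06-01 22:07:01,002 FATAL org.apache.hadoop.mapred.JobTracker: java.net.BindException: Address already in use
EOF

# INFO entries like the safe-mode one are routine; ERROR/FATAL entries
# are what would actually explain a dead daemon.
grep -E ' (ERROR|FATAL) ' /tmp/jobtracker-sample.log
```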

- Aaron


On Mon, Jun 1, 2009 at 7:22 AM, Nick Cen cenyo...@gmail.com wrote:

 Hi All,

 I can start Hadoop by running start-dfs.sh and start-mapred.sh, but when
 I use start-all.sh only HDFS starts. The JobTracker's log indicates an
 exception when starting the JobTracker.

 2009-06-01 22:06:59,675 INFO org.apache.hadoop.mapred.JobTracker: problem
 cleaning system directory:
 hdfs://localhost:54310/tmp/hadoop-nick/mapred/system
 org.apache.hadoop.ipc.RemoteException:
 org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete
 /tmp/hadoop-nick/mapred/system. Name node is in safe mode.
 The ratio of reported blocks 0. has not reached the threshold 0.9990.
 Safe mode will be turned off automatically.
 [stack trace snipped]


 Is this exception caused by there being no delay between the start-dfs.sh
 and start-mapred.sh scripts? Thanks.


 --
 http://daily.appspot.com/food/