Re: Unable to start hadoop-0.20.2 but able to start hadoop-0.20.203 cluster

2011-05-26 Thread DAN
Hi, Richard

Pay attention to "Not able to place enough replicas, still in need of 1". Please
confirm that "dfs.replication" is set correctly in hdfs-site.xml.

Good luck!
Dan

--



At 2011-05-27 08:01:37, "Xu, Richard" wrote:

>Hi Folks,
>
>We are trying to get HBase and Hadoop running on a cluster, using 2 Solaris 
>servers for now.
>
>Because of the incompatibility issue between HBase and Hadoop, we have to 
>stick with the hadoop-0.20.2-append release.
>
>It is very straightforward to get hadoop-0.20.203 running, but we have been 
>stuck for several days with hadoop-0.20.2, even with the official release, not 
>just the append version.
>
>1. Once we try to run start-mapred.sh (hadoop-daemon.sh --config $HADOOP_CONF_DIR 
>start jobtracker), the following errors show up in the namenode and jobtracker logs:
>
>2011-05-26 12:30:29,169 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1
>2011-05-26 12:30:29,175 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000, call addBlock(/tmp/hadoop-cfadm/mapred/system/jobtracker.info, DFSClient_2146408809) from 169.193.181.212:55334: error: java.io.IOException: File /tmp/hadoop-cfadm/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
>java.io.IOException: File /tmp/hadoop-cfadm/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>        at java.lang.reflect.Method.invoke(Method.java:597)
>        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at javax.security.auth.Subject.doAs(Subject.java:396)
>        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>
>2. Also, the Configured Capacity is 0, and we cannot put any file into HDFS.
>
>3. On the datanode server, there are no errors in the logs, but the tasktracker 
>log has the following suspicious entries:
>2011-05-25 23:36:10,839 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>2011-05-25 23:36:10,839 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 41904: starting
>2011-05-25 23:36:10,852 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 41904: starting
>2011-05-25 23:36:10,853 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 41904: starting
>2011-05-25 23:36:10,853 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 41904: starting
>2011-05-25 23:36:10,853 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 41904: starting
>...
>2011-05-25 23:36:10,855 INFO org.apache.hadoop.ipc.Server: IPC Server handler 63 on 41904: starting
>2011-05-25 23:36:10,950 INFO org.apache.hadoop.mapred.TaskTracker: TaskTracker up at: localhost/127.0.0.1:41904
>2011-05-25 23:36:10,950 INFO org.apache.hadoop.mapred.TaskTracker: Starting tracker tracker_loanps3d:localhost/127.0.0.1:41904
>
>
>I have tried all suggestions found so far, including
> 1) removing the hadoop-name and hadoop-data folders and reformatting the namenode;
> 2) cleaning up all temp files/folders under /tmp;
>
>But nothing works.
>
>Your help is greatly appreciated.
>
>Thanks,
>
>RX


RE: Re: Unable to start hadoop-0.20.2 but able to start hadoop-0.20.203 cluster

2011-05-27 Thread Xu, Richard
That setting is 3.

From: DAN [mailto:chaidong...@163.com]
Sent: Thursday, May 26, 2011 10:23 PM
To: common-user@hadoop.apache.org; Xu, Richard [ICG-IT]
Subject: Re: Unable to start hadoop-0.20.2 but able to start hadoop-0.20.203 
cluster

Hi, Richard

Pay attention to "Not able to place enough replicas, still in need of 1". Please
confirm that "dfs.replication" is set correctly in hdfs-site.xml.

Good luck!
Dan



Re: Re: Unable to start hadoop-0.20.2 but able to start hadoop-0.20.203 cluster

2011-05-27 Thread Simon
First you need to make sure that your DFS daemons are running.
You can start your namenode and datanode separately on the master and slave
nodes and see what happens, using the following commands:

hadoop namenode
hadoop datanode

The chances are that your datanode cannot start correctly.
Share your error logs if there are any errors.
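
If the daemons do come up, a quick way to check whether the namenode actually
sees any datanodes (which would also explain the Configured Capacity of 0) is:

hadoop dfsadmin -report

If that report shows 0 live datanodes, the datanode process is running but never
registered with the namenode, which matches the "replicated to 0 nodes" error.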

HTH~

Thanks
Simon

2011/5/27 Xu, Richard 

> That setting is 3.


-- 
Regards,
Simon


Re: RE: Re: Unable to start hadoop-0.20.2 but able to start hadoop-0.20.203 cluster

2011-05-27 Thread DAN
Hi, Richard

You have "2 Solaris servers for now", yet dfs.replication is set to 3; the
replication factor cannot exceed the number of available datanodes. These
don't match.
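
As a sketch of one fix, assuming you keep the 2-node setup: lower dfs.replication
in hdfs-site.xml on every node to at most the number of datanodes, then restart
HDFS:

  <property>
    <name>dfs.replication</name>
    <!-- at most the number of datanodes; 2 here for a 2-node cluster -->
    <value>2</value>
  </property>

For files already written, the replication factor can be changed afterwards with
"hadoop fs -setrep", e.g. "hadoop fs -setrep -R 2 /".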

Good Luck
Dan


At 2011-05-27 19:34:10, "Xu, Richard" wrote:

>That setting is 3.