What errors are you seeing in your hadoop-namenode and datanode logs?
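(By default they should be under the logs/ directory of your Hadoop install,
in files named like hadoop-<user>-namenode-<host>.log, assuming you haven't
changed HADOOP_LOG_DIR.)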

Dennis Kubes

cybercouf wrote:
> Yes it is.
> 
> Here are more details:
> 
> $ cat /etc/hosts
> 127.0.0.1       localhost
> 84.x.x.x    myhostname.mydomain.com myhostname
> 
> # ping myhostname
> PING myhostname.mydomain.com (84.x.x.x) 56(84) bytes of data.
> 64 bytes from myhostname.mydomain.com (84.x.x.x): icmp_seq=1 ttl=64 time=0.017 ms
> 
> and when I run start-all.sh, the namenode seems to be running:
> # netstat -tupl
> Active Internet connections (only servers)
> Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
> tcp6       0      0 *:50070                 *:*                     LISTEN      18241/java
> tcp6       0      0 *:ssh                   *:*                     LISTEN      3350/sshd
> tcp6       0      0 *:50010                 *:*                     LISTEN      18279/java
> 
> I also noticed that my nutch user (the one I launch all the scripts from)
> is not allowed to ping:
> nutch:~/search$ ping myhostname
> ping: icmp open socket: Operation not permitted
> 
> but that shouldn't be related to the failure to open a Java socket, should it?
> (java.net.ConnectException: Connection refused)
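> 
> (As far as I understand, ping needs a raw ICMP socket, which unprivileged
> users can't open, while Java opens a plain TCP socket, so the two should
> be unrelated. A rough sketch to test the TCP side directly; "myhostname"
> and port 9000 are just the values from my conf below:
> 
> import java.net.Socket;
> 
> public class PortCheck {
>     public static void main(String[] args) throws Exception {
>         // Plain TCP connect; needs no raw-socket privilege, unlike ping
>         Socket s = new Socket("myhostname", 9000);
>         System.out.println("connected to " + s.getRemoteSocketAddress());
>         s.close();
>     }
> }
> 
> If that also gives "Connection refused", nothing is listening on port 9000.)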
> 
> thanks for your help!
> 
> 
> Dennis Kubes wrote:
>> Make sure the hosts file on your namenode is set up correctly:
>>
>> 127.0.0.1               localhost.localdomain localhost
>> 10.x.x.x             myhostname.mydomain.com myhostname
>>
>> As opposed to:
>>
>> 127.0.0.1               localhost.localdomain localhost 
>> myhostname.mydomain.com myhostname
>>
>> The problem may be that the machine is listening only on the loopback 
>> interface.  If you ping myhostname from the local box, you should see 
>> the real IP and not the loopback address.
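>>
>> A quick way to double-check what Java itself resolves (just a sketch
>> using the standard java.net API; this is roughly the resolution Hadoop
>> sees when it binds its sockets):
>>
>> import java.net.InetAddress;
>>
>> public class ResolveCheck {
>>     public static void main(String[] args) throws Exception {
>>         // Should print the real IP (e.g. 10.x.x.x), not 127.0.0.1
>>         InetAddress addr = InetAddress.getLocalHost();
>>         System.out.println(addr.getHostName() + " -> " + addr.getHostAddress());
>>     }
>> }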
>>
>> Let me know if this was the problem or if you need more help.
>>
>> Dennis Kubes
>>
>> cybercouf wrote:
>>> I'm trying to set up Hadoop using these guides:
>>> http://wiki.apache.org/nutch/NutchHadoopTutorial and
>>> http://www.nabble.com/Nutch-Step-by-Step-Maybe-someone-will-find-this-useful---tf3526281.html
>>> But I'm stuck at an early step: getting a single machine running.
>>> I'm using Nutch 0.8.1, and therefore the bundled Hadoop
>>> "hadoop-0.4.0-patched.jar", on Sun JVM 1.5.0_11.
>>>
>>> When I start the namenode (using ./bin/start-all.sh), I get this in the
>>> namenode log:
>>>
>>> 2007-05-02 12:39:51,335 INFO  util.Credential - Checking Resource aliases
>>> 2007-05-02 12:39:51,349 INFO  http.HttpServer - Version Jetty/5.1.4
>>> 2007-05-02 12:39:51,350 WARN  servlet.WebApplicationContext - Web application not found
>>> /home/nutch/search/file:/home/nutch/search/lib/hadoop-0.4.0-patched.jar!/webapps/dfs
>>> 2007-05-02 12:39:51,351 WARN  servlet.WebApplicationContext - Configuration error on
>>> /home/nutch/search/file:/home/nutch/search/lib/hadoop-0.4.0-patched.jar!/webapps/dfs
>>> java.io.FileNotFoundException:
>>> /home/nutch/search/file:/home/nutch/search/lib/hadoop-0.4.0-patched.jar!/webapps/dfs
>>>     at org.mortbay.jetty.servlet.WebApplicationContext.resolveWebApp(WebApplicationContext.java:266)
>>>     at org.mortbay.jetty.servlet.WebApplicationContext.doStart(WebApplicationContext.java:449)
>>>     at org.mortbay.util.Container.start(Container.java:72)
>>>     at org.mortbay.http.HttpServer.doStart(HttpServer.java:753)
>>>     at org.mortbay.util.Container.start(Container.java:72)
>>>     at org.apache.hadoop.mapred.StatusHttpServer.start(StatusHttpServer.java:138)
>>>     at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:173)
>>>     at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:91)
>>>     at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:82)
>>>     at org.apache.hadoop.dfs.NameNode.main(NameNode.java:491)
>>> 2007-05-02 12:39:51,353 INFO  util.Container - Started HttpContext[/logs,/logs]
>>> 2007-05-02 12:39:51,353 INFO  util.Container - Started HttpContext[/static,/static]
>>> 2007-05-02 12:39:51,357 INFO  http.SocketListener - Started SocketListener on 0.0.0.0:50070
>>>
>>> and afterwards I can't access it:
>>> $ ./bin/hadoop dfs -ls
>>> ls: Connection refused
>>>
>>> hadoop.log:
>>> 2007-05-02 12:41:40,030 WARN  fs.DFSClient - Problem renewing lease for DFSClient_2015604182: java.net.ConnectException: Connection refused
>>>     at java.net.PlainSocketImpl.socketConnect(Native Method)
>>> [...]
>>>
>>>
>>>
>>> 1. I can't understand why there is this FileNotFoundException; I didn't
>>> change anything in the Hadoop jar that ships with Nutch.
>>>
>>> 2. It looks like the namenode is running (when I stop it I get the
>>> message "stopping namenode"), but why can't I access it? (Is the IP
>>> from the log correct? 0.0.0.0:50070)
>>> Everything is on the same machine, and my conf file looks OK:
>>> fs.default.name   myhostname:9000
>>> mapred.job.tracker  myhostname:9001
>>> mapred.map.tasks  2
>>> mapred.reduce.tasks  2
>>> dfs.name.dir  /home/nutch/filesystem/name
>>> dfs.data.dir  /home/nutch/filesystem/data
>>> mapred.system.dir  /home/nutch/filesystem/mapreduce/system
>>> mapred.local.dir  /home/nutch/filesystem/mapreduce/local
>>> dfs.replication  1
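>>>
>>> (For reference: in conf/hadoop-site.xml each of those entries is a
>>> property element, e.g. the first one is
>>>
>>> <property>
>>>   <name>fs.default.name</name>
>>>   <value>myhostname:9000</value>
>>> </property>
>>>
>>> and the others follow the same pattern.)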
> 
