Ynt: Re: Ynt: Re: Cannot access Jobtracker and namenode

2009-04-13 Thread halilibrahimcakir
racks and 0 datanodes
2009-04-12 23:17:28,935 INFO org.apache.hadoop.hdfs.StateChange: STATE*
UnderReplicatedBlocks has 0 blocks
2009-04-12 23:17:40,285 INFO org.mortbay.http.HttpServer: Version
Jetty/5.1.4
2009-04-12 23:17:40,315 INFO org.mortbay.util.Credential: Checking
Resource aliases
2009-04-12 23:17:41,173 INFO org.mortbay.util.Container: Started
org.mortbay.jetty.servlet.webapplicationhand...@180f96c
2009-04-12 23:17:41,331 INFO org.mortbay.util.Container: Started
WebApplicationContext[/static,/static]
2009-04-12 23:17:41,722 INFO org.mortbay.util.Container: Started
org.mortbay.jetty.servlet.webapplicationhand...@f18e8e
2009-04-12 23:17:41,725 INFO org.mortbay.util.Container: Started
WebApplicationContext[/logs,/logs]
2009-04-12 23:17:42,090 INFO org.mortbay.util.Container: Started
org.mortbay.jetty.servlet.webapplicationhand...@19d75ee
2009-04-12 23:17:42,100 INFO org.mortbay.util.Container: Started
WebApplicationContext[/,/]
2009-04-12 23:17:42,106 INFO org.mortbay.http.SocketListener: Started
SocketListener on 0.0.0.0:50070
2009-04-12 23:17:42,109 INFO org.mortbay.util.Container: Started
org.mortbay.jetty.ser...@ce2187
2009-04-12 23:17:42,109 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Web-server up at:
0.0.0.0:50070
2009-04-12 23:17:42,109 INFO org.apache.hadoop.ipc.Server: IPC Server
Responder: starting
2009-04-12 23:17:42,113 INFO org.apache.hadoop.ipc.Server: IPC Server
listener on 9000: starting
2009-04-12 23:17:42,132 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 0 on 9000: starting
2009-04-12 23:17:42,133 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 1 on 9000: starting
2009-04-12 23:17:42,134 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 2 on 9000: starting
2009-04-12 23:17:42,135 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 3 on 9000: starting
2009-04-12 23:17:42,135 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 4 on 9000: starting
2009-04-12 23:17:42,136 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 5 on 9000: starting
2009-04-12 23:17:42,137 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 6 on 9000: starting
2009-04-12 23:17:42,137 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 7 on 9000: starting
2009-04-12 23:17:42,138 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 8 on 9000: starting
2009-04-12 23:17:42,139 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 9 on 9000: starting
2009-04-12 23:17:47,353 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
ugi=root,root    ip=/127.0.0.1   
cmd=listStatus   
src=/tmp/hadoop-root/mapred/system   
dst=null    perm=null
2009-04-12 23:17:47,382 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
ugi=root,root    ip=/127.0.0.1   
cmd=mkdirs   
src=/tmp/hadoop-root/mapred/system   
dst=null    perm=root:supergroup:rwxr-xr-x
2009-04-12 23:17:47,389 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
ugi=root,root    ip=/127.0.0.1   
cmd=setPermission   
src=/tmp/hadoop-root/mapred/system   
dst=null    perm=root:supergroup:rwx-wx-wx
2009-04-12 23:18:12,438 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
NameSystem.registerDatanode: node registration from 127.0.0.1:50010
storage DS-1295480608-127.0.1.1-50010-1239567492422
2009-04-12 23:18:12,446 INFO org.apache.hadoop.net.NetworkTopology: Adding
a new node: /default-rack/127.0.0.1:50010
2009-04-12 23:22:59,365 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from
127.0.0.1
2009-04-12 23:22:59,366 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
transactions: 5 Total time for transactions(ms): 0 Number of syncs: 3
SyncTimes(ms): 8 
2009-04-12 23:23:00,294 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll FSImage from
127.0.0.1
2009-04-12 23:23:00,295 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
transactions: 0 Total time for transactions(ms): 0 Number of syncs: 1
SyncTimes(ms): 5 
2009-04-12 23:58:24,184 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at debian/127.0.1.1
************************************************************/

Thanks

----- Original Message -----
From: core-user@hadoop.apache.org
To: core-user@hadoop.apache.org
Sent: 13/04/2009 9:26
Subject: Re: Ynt: Re: Cannot access Jobtracker and namenode
It's normal that they are all empty. Look at files with ".log" extension.

On Sunday, 12 April 2009 at 23:30, halilibrahimcakir
<halilibrahimca...@mynet.com>
wrote:
> I followed these steps:
>
> $ bin/stop-all.sh
> $ rm -ri /tmp/hadoop-root
> $ bin/hadoop namenode -format
> $ bin/start-all.sh
>
> and checked "localhost:50070" and "localhost:50030" in my browser; the
> result was no different. Again "Error 404". I looked at these files:
>
> $ gedit hadoop-0.19.0/logs/hadoop-root-namenode-debian.out1
> $

Re: Ynt: Re: Cannot access Jobtracker and namenode

2009-04-12 Thread Rasit OZDAS
It's normal that they are all empty. Look at files with ".log" extension.
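The advice above can be sketched as a small helper that skips the empty `.out` files and prints only logs that actually contain something. This is a sketch; the default path is just an example install location, adjust it to your own.

```shell
#!/bin/sh
# check_logs: print the tail of every NON-EMPTY .log file in a Hadoop logs
# directory. The .out files usually hold nothing; startup exceptions land
# in the .log files, near the bottom. Default path is an assumption.
check_logs() {
    dir="${1:-hadoop-0.19.0/logs}"
    for f in "$dir"/*.log; do
        [ -s "$f" ] || continue      # skip zero-length files
        echo "== $f =="
        tail -n 20 "$f"              # the exception, if any, is usually here
    done
}
```

Usage: `check_logs hadoop-0.19.0/logs` — whatever exception this prints is the thing worth pasting to the list.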

On Sunday, 12 April 2009 at 23:30, halilibrahimcakir
wrote:
> I followed these steps:
>
> $ bin/stop-all.sh
> $ rm -ri /tmp/hadoop-root
> $ bin/hadoop namenode -format
> $ bin/start-all.sh
>
> and checked "localhost:50070" and "localhost:50030" in my browser; the
> result was no different. Again "Error 404". I looked at these files:
>
> $ gedit hadoop-0.19.0/logs/hadoop-root-namenode-debian.out1
> $ gedit hadoop-0.19.0/logs/hadoop-root-namenode-debian.out2
> $ gedit hadoop-0.19.0/logs/hadoop-root-namenode-debian.out3
> $ gedit hadoop-0.19.0/logs/hadoop-root-namenode-debian.out4
>
> The 4th file is the last namenode log in the logs directory.
> All of them are empty. I don't understand what is wrong.
>
> ----- Original Message -----
> From: core-user@hadoop.apache.org
> To: core-user@hadoop.apache.org
> Sent: 12/04/2009 22:56
> Subject: Re: Cannot access Jobtracker and namenode
> Try looking at namenode logs (under "logs" directory). There should be
> an exception. Paste it here if you don't understand what it means.
>
> On Sunday, 12 April 2009 at 22:22, halilibrahimcakir
> <halilibrahimca...@mynet.com>
> wrote:
> > I typed:
> >
> > $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
> > $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
> >
> > Deleted this directory:
> >
> > $ rm -ri /tmp/hadoop-root
> >
> > Formatted namenode again:
> >
> > $ /bin/hadoop namenode -format
> >
> > Stopped:
> >
> > $ /bin/stop-all.sh
> >
> >
> > then typed:
> >
> >
> >
> > $ ssh localhost
> >
> > and it didn't ask me for a password. I started:
> >
> > $ /bin/start-all.sh
> >
> > But nothing changed :(
> >
> > ----- Original Message -----
> > From: core-user@hadoop.apache.org
> > To: core-user@hadoop.apache.org
> > Sent: 12/04/2009 21:33
> > Subject: Re: Ynt: Re: Cannot access Jobtracker and namenode
> > There are two commands in the Hadoop quick start for passwordless ssh.
> > Try those.
> >
> > $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
> > $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
> >
> > http://hadoop.apache.org/core/docs/current/quickstart.html
> >
> > --
> > M. Raşit ÖZDAŞ
> >
> > Halil İbrahim ÇAKIR
> >
> > Dumlupınar Üniversitesi Bilgisayar Mühendisliği
> >
> > http://cakirhal.blogspot.com
> >
> >
>
>
>
> --
> M. Raşit ÖZDAŞ
>
> Halil İbrahim ÇAKIR
>
> Dumlupınar Üniversitesi Bilgisayar Mühendisliği
>
> http://cakirhal.blogspot.com
>
>



-- 
M. Raşit ÖZDAŞ


Ynt: Re: Cannot access Jobtracker and namenode

2009-04-12 Thread halilibrahimcakir
I followed these steps:

$ bin/stop-all.sh
$ rm -ri /tmp/hadoop-root
$ bin/hadoop namenode -format
$ bin/start-all.sh

and checked "localhost:50070" and "localhost:50030" in my browser; the
result was no different. Again "Error 404". I looked at these files:

$ gedit hadoop-0.19.0/logs/hadoop-root-namenode-debian.out1
$ gedit hadoop-0.19.0/logs/hadoop-root-namenode-debian.out2
$ gedit hadoop-0.19.0/logs/hadoop-root-namenode-debian.out3
$ gedit hadoop-0.19.0/logs/hadoop-root-namenode-debian.out4

The 4th file is the last namenode log in the logs directory.
All of them are empty. I don't understand what is wrong.

----- Original Message -----
From: core-user@hadoop.apache.org
To: core-user@hadoop.apache.org
Sent: 12/04/2009 22:56
Subject: Re: Cannot access Jobtracker and namenode
Try looking at namenode logs (under "logs" directory). There should be
an exception. Paste it here if you don't understand what it means.

On Sunday, 12 April 2009 at 22:22, halilibrahimcakir
<halilibrahimca...@mynet.com>
wrote:
> I typed:
>
> $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
> $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
>
> Deleted this directory:
>
> $ rm -ri /tmp/hadoop-root
>
> Formatted namenode again:
>
> $ /bin/hadoop namenode -format
>
> Stopped:
>
> $ /bin/stop-all.sh
>
>
> then typed:
>
>
>
> $ ssh localhost
>
> and it didn't ask me for a password. I started:
>
> $ /bin/start-all.sh
>
> But nothing changed :(
>
> ----- Original Message -----
> From: core-user@hadoop.apache.org
> To: core-user@hadoop.apache.org
> Sent: 12/04/2009 21:33
> Subject: Re: Ynt: Re: Cannot access Jobtracker and namenode
> There are two commands in the Hadoop quick start for passwordless ssh.
> Try those.
>
> $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
> $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
>
> http://hadoop.apache.org/core/docs/current/quickstart.html
>
> --
> M. Raşit ÖZDAŞ
>
> Halil İbrahim ÇAKIR
>
> Dumlupınar Üniversitesi Bilgisayar Mühendisliği
>
> http://cakirhal.blogspot.com
>
>



-- 
M. Raşit ÖZDAŞ
 
Halil İbrahim ÇAKIR

Dumlupınar Üniversitesi Bilgisayar Mühendisliği 

http://cakirhal.blogspot.com 



Re: Cannot access Jobtracker and namenode

2009-04-12 Thread Rasit OZDAS
Try looking at namenode logs (under "logs" directory). There should be
an exception. Paste it here if you don't understand what it means.

On Sunday, 12 April 2009 at 22:22, halilibrahimcakir
wrote:
> I typed:
>
> $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
> $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
>
> Deleted this directory:
>
> $ rm -ri /tmp/hadoop-root
>
> Formatted namenode again:
>
> $ /bin/hadoop namenode -format
>
> Stopped:
>
> $ /bin/stop-all.sh
>
>
> then typed:
>
>
>
> $ ssh localhost
>
> and it didn't ask me for a password. I started:
>
> $ /bin/start-all.sh
>
> But nothing changed :(
>
> ----- Original Message -----
> From: core-user@hadoop.apache.org
> To: core-user@hadoop.apache.org
> Sent: 12/04/2009 21:33
> Subject: Re: Ynt: Re: Cannot access Jobtracker and namenode
> There are two commands in the Hadoop quick start for passwordless ssh.
> Try those.
>
> $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
> $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
>
> http://hadoop.apache.org/core/docs/current/quickstart.html
>
> --
> M. Raşit ÖZDAŞ
>
> Halil İbrahim ÇAKIR
>
> Dumlupınar Üniversitesi Bilgisayar Mühendisliği
>
> http://cakirhal.blogspot.com
>
>



-- 
M. Raşit ÖZDAŞ


Re: Ynt: Re: Ynt: Re: Cannot access Jobtracker and namenode

2009-04-12 Thread Mithila Nagendra
You have to stop the cluster before you reformat. Also restarting the master
might help.
Mithila

2009/4/12 halilibrahimcakir 

> I typed:
>
> $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
> $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
>
> Deleted this directory:
>
> $ rm -ri /tmp/hadoop-root
>
> Formatted namenode again:
>
> $ /bin/hadoop namenode -format
>
> Stopped:
>
> $ /bin/stop-all.sh
>
>
> then typed:
>
>
>
> $ ssh localhost
>
> and it didn't ask me for a password. I started:
>
> $ /bin/start-all.sh
>
> But nothing changed :(
>
> ----- Original Message -----
> From: core-user@hadoop.apache.org
> To: core-user@hadoop.apache.org
> Sent: 12/04/2009 21:33
> Subject: Re: Ynt: Re: Cannot access Jobtracker and namenode
> There are two commands in the Hadoop quick start for passwordless ssh.
> Try those.
>
> $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
> $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
>
> http://hadoop.apache.org/core/docs/current/quickstart.html
>
> --
> M. Raşit ÖZDAŞ
>
> Halil İbrahim ÇAKIR
>
> Dumlupınar Üniversitesi Bilgisayar Mühendisliği
>
> http://cakirhal.blogspot.com
>
>


Ynt: Re: Ynt: Re: Cannot access Jobtracker and namenode

2009-04-12 Thread halilibrahimcakir
I typed:

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Deleted this directory:

$ rm -ri /tmp/hadoop-root

Formatted namenode again:

$ /bin/hadoop namenode -format

Stopped:

$ /bin/stop-all.sh


then typed:



$ ssh localhost

and it didn't ask me for a password. I started:

$ /bin/start-all.sh

But nothing changed :(

----- Original Message -----
From: core-user@hadoop.apache.org
To: core-user@hadoop.apache.org
Sent: 12/04/2009 21:33
Subject: Re: Ynt: Re: Cannot access Jobtracker and namenode
There are two commands in the Hadoop quick start for passwordless ssh.
Try those.

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

http://hadoop.apache.org/core/docs/current/quickstart.html

-- 
M. Raşit ÖZDAŞ
 
Halil İbrahim ÇAKIR

Dumlupınar Üniversitesi Bilgisayar Mühendisliği 

http://cakirhal.blogspot.com 



Re: Ynt: Re: Cannot access Jobtracker and namenode

2009-04-12 Thread Rasit OZDAS
There are two commands in the Hadoop quick start for passwordless ssh.
Try those.

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

http://hadoop.apache.org/core/docs/current/quickstart.html
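If those two commands still leave ssh prompting for a password, a common culprit is file permissions: sshd silently ignores authorized_keys when ~/.ssh or the file itself is group/world writable. A minimal sketch (the function name and default path are my own, not from the quick start):

```shell
#!/bin/sh
# fix_ssh_perms: tighten the permissions sshd requires before it will
# accept key-based login (hypothetical helper; path argument is assumed).
fix_ssh_perms() {
    d="${1:-$HOME/.ssh}"
    chmod 700 "$d" && chmod 600 "$d/authorized_keys"
}
```

After that, `ssh -o BatchMode=yes localhost echo ok` should print "ok" without prompting; BatchMode makes ssh fail instead of falling back to a password prompt, so it is a clean yes/no test.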

-- 
M. Raşit ÖZDAŞ


Ynt: Re: Cannot access Jobtracker and namenode

2009-04-12 Thread halilibrahimcakir

Yes, it usually does when I type "ssh localhost" in the terminal.


----- Original Message -----
From: core-user@hadoop.apache.org
To: core-user@hadoop.apache.org
Sent: 12/04/2009 20:58
Subject: Re: Cannot access Jobtracker and namenode
Does your system request a password when you ssh to localhost outside
hadoop?

On Sunday, 12 April 2009 at 20:51, halilibrahimcakir
<halilibrahimca...@mynet.com>
wrote:
>
> Hi
>
> I am new to Hadoop. I downloaded Hadoop-0.19.0 and followed the
> instructions in the quick start
> manual (http://hadoop.apache.org/core/docs/r0.19.1/quickstart.html). When I
> came to the Pseudo-Distributed Operation section there was no problem, but
> localhost:50070 and localhost:50030 couldn't be opened. It says "localhost
> refused the connection". I tried this on another machine, but there it says
> "Http Error 404: /dfshealth.jsp ...". How can I see these pages and
> continue using Hadoop? Thanks.
>
> Additional Information:
>
> OS: Debian 5.0 (latest version)
> JDK: Sun Java 1.6 (latest version)
> rsync and ssh installed
> edited hadoop-site.xml properly
>
> Halil İbrahim ÇAKIR
>
> Dumlupınar Üniversitesi Bilgisayar Mühendisliği
>
> http://cakirhal.blogspot.com
>
>



-- 
M. Raşit ÖZDAŞ
 
Halil İbrahim ÇAKIR

Dumlupınar Üniversitesi Bilgisayar Mühendisliği 

http://cakirhal.blogspot.com 



Re: Cannot access Jobtracker and namenode

2009-04-12 Thread Rasit OZDAS
Does your system request a password when you ssh to localhost outside hadoop?

On Sunday, 12 April 2009 at 20:51, halilibrahimcakir
wrote:
>
> Hi
>
> I am new to Hadoop. I downloaded Hadoop-0.19.0 and followed the
> instructions in the quick start
> manual (http://hadoop.apache.org/core/docs/r0.19.1/quickstart.html). When I
> came to the Pseudo-Distributed Operation section there was no problem, but
> localhost:50070 and localhost:50030 couldn't be opened. It says "localhost
> refused the connection". I tried this on another machine, but there it says
> "Http Error 404: /dfshealth.jsp ...". How can I see these pages and
> continue using Hadoop? Thanks.
>
> Additional Information:
>
> OS: Debian 5.0 (latest version)
> JDK: Sun Java 1.6 (latest version)
> rsync and ssh installed
> edited hadoop-site.xml properly
>
> Halil İbrahim ÇAKIR
>
> Dumlupınar Üniversitesi Bilgisayar Mühendisliği
>
> http://cakirhal.blogspot.com
>
>



-- 
M. Raşit ÖZDAŞ


Cannot access Jobtracker and namenode

2009-04-12 Thread halilibrahimcakir

Hi

I am new to Hadoop. I downloaded Hadoop-0.19.0 and followed the
instructions in the quick start
manual (http://hadoop.apache.org/core/docs/r0.19.1/quickstart.html). When I
came to the Pseudo-Distributed Operation section there was no problem, but
localhost:50070 and localhost:50030 couldn't be opened. It says "localhost
refused the connection". I tried this on another machine, but there it says
"Http Error 404: /dfshealth.jsp ...". How can I see these pages and
continue using Hadoop? Thanks.

Additional Information: 

OS: Debian 5.0 (latest version)
JDK: Sun Java 1.6 (latest version)
rsync and ssh installed
edited hadoop-site.xml properly

Halil İbrahim ÇAKIR

Dumlupınar Üniversitesi Bilgisayar Mühendisliği 

http://cakirhal.blogspot.com
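The two symptoms in this thread point in different directions: "connection refused" means nothing is listening on the port at all (the daemon died at startup, so check the .log files), while a 404 means Jetty is up but could not find its webapps (for example, when Hadoop is started from the wrong working directory). A sketch of a quick probe, using bash's built-in /dev/tcp so no extra tools are needed:

```shell
#!/bin/bash
# port_open: succeed if something is listening on host:port.
# "refused" on 50070/50030  -> daemon not running, read the logs;
# port open but UI gives 404 -> daemon up, webapp lookup is the problem.
port_open() {
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}
# usage (port numbers are the 0.19-era defaults):
#   port_open localhost 50070 && echo "namenode UI listening"
#   port_open localhost 50030 && echo "jobtracker UI listening"
```

Note /dev/tcp is a bash feature, not POSIX sh, so run this with bash.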