Re: Error: INFO ipc.Client: Retrying connect to server: /192.168.100.11:8020. Already tried 0 time(s).

2012-11-02 Thread shriparv
This just tells you that the service you are trying to connect to is not
running on that address or port.

Just try netstat -nlt | grep 8020 and check whether it shows anything. What I
can tell from the error is that either the server is not running, or it is
running on some other port.
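As a concrete version of that check (a minimal sketch; 8020 is the port from the error above, and the /dev/tcp probe assumes bash is available):

```shell
# -n numeric addresses, -l listening sockets, -t TCP only.
# If nothing is printed, no process is listening on 8020 on this host.
netstat -nlt | grep ':8020 '

# Probe the port directly with bash's /dev/tcp; an immediate failure
# ("Connection refused") means nothing is accepting connections there.
if timeout 2 bash -c 'cat < /dev/null > /dev/tcp/127.0.0.1/8020' 2>/dev/null; then
  echo "port 8020 is open"
else
  echo "nothing listening on 8020"
fi
```

The same check works for any port named in the retry message (9000, 9001, and so on).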



--
View this message in context: 
http://hadoop.6.n7.nabble.com/Error-INFO-ipc-Client-Retrying-connect-to-server-192-168-100-11-8020-Already-tried-0-time-s-tp11885p66957.html
Sent from the common-user mailing list archive at Nabble.com.


Retrying connect to server: localhost/127.0.0.1:9000.

2012-07-27 Thread Keith Wiley
I'm plagued with this error:
Retrying connect to server: localhost/127.0.0.1:9000.

I'm trying to set up hadoop on a new machine, just a basic pseudo-distributed 
setup.  I've done this quite a few times on other machines, but this time I'm 
kinda stuck.  I formatted the namenode without obvious errors and ran 
start-all.sh with no errors to stdout.  However, the logs are full of that 
error above and if I attempt to access hdfs (ala hadoop fs -ls /) I get that 
error again.  Obviously, my core-site.xml sets fs.default.name to 
hdfs://localhost:9000.

I assume something is wrong with /etc/hosts, but I'm not sure how to fix it.  
If hostname returns X and hostname -f returns Y, then what are the 
corresponding entries in /etc/hosts?
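To the question above: a common /etc/hosts layout puts the fully qualified name first on the line for the machine's real address (the names and addresses in the sketch are illustrative, not taken from the thread):

```shell
# /etc/hosts sketch, for the case where `hostname` prints "mybox" (X) and
# `hostname -f` prints "mybox.example.com" (Y):
#
#   127.0.0.1      localhost
#   192.168.1.50   mybox.example.com  mybox
#
# With fs.default.name = hdfs://localhost:9000, the daemons bind via the
# localhost line, so that entry is the one the retry loop depends on.
hostname                                   # short name (X)
hostname -f 2>/dev/null || echo "no FQDN"  # fully qualified name (Y)
```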

Thanks for any help.


Keith Wiley kwi...@keithwiley.com keithwiley.com music.keithwiley.com

I used to be with it, but then they changed what it was.  Now, what I'm with
isn't it, and what's it seems weird and scary to me.
   --  Abe (Grandpa) Simpson




Re: Retrying connect to server: localhost/127.0.0.1:9000.

2012-07-27 Thread anil gupta
Hi Keith,

Does pinging localhost return a reply? Also try telnetting to localhost on port 9000.

Thanks,
Anil
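Anil's two checks, spelled out (a sketch only; nc is used as a stand-in where telnet is not installed):

```shell
# 1. Does localhost resolve and answer pings?
ping -c 1 localhost || echo "ping failed: check /etc/hosts for a localhost entry"

# 2. Is anything accepting connections on the NameNode port?  (telnet
#    localhost 9000 shows the same thing; nc -z just tests connectability.)
if nc -z -w 2 localhost 9000 2>/dev/null; then
  echo "9000 reachable"
else
  echo "9000 refused or unreachable"
fi
```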

On Fri, Jul 27, 2012 at 11:22 AM, Keith Wiley kwi...@keithwiley.com wrote:

 I'm plagued with this error:
 Retrying connect to server: localhost/127.0.0.1:9000.
 [...]

-- 
Thanks & Regards,
Anil Gupta


Re: Retrying connect to server: localhost/127.0.0.1:9000.

2012-07-27 Thread Bejoy KS
Hi Keith

Your NameNode is still not up. What do the NN logs say?

Regards
Bejoy KS

Sent from handheld, please excuse typos.
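A quick way to act on Bejoy's suggestion (log directory and $HADOOP_HOME defaults are illustrative; the exact file name depends on the user and hostname the daemon runs as):

```shell
# NameNode logs live under $HADOOP_HOME/logs by default, named
# hadoop-<user>-namenode-<hostname>.log.  Look at the most recent one:
LOGDIR="${HADOOP_HOME:-/usr/local/hadoop}/logs"
ls -t "$LOGDIR"/hadoop-*-namenode-*.log 2>/dev/null | head -n 1 \
  | xargs -r tail -n 50
# Typical culprits near the bottom: a java.net.BindException (port already
# in use), "NameNode is not formatted", or permission errors on the name dir.
```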

-Original Message-
From: anil gupta anilgupt...@gmail.com
Date: Fri, 27 Jul 2012 11:30:57 
To: common-user@hadoop.apache.org
Reply-To: common-user@hadoop.apache.org
Subject: Re: Retrying connect to server: localhost/127.0.0.1:9000.

Hi Keith,

Does ping to localhost returns a reply? Try telneting to localhost 9000.
[...]



Re: Retrying connect to server: localhost/127.0.0.1:9000.

2012-07-27 Thread Keith Wiley
I got it.  The hadoop installation had been done by root (I can't claim credit 
for that thankfully), and when I chowned everything to my account, I missed a 
few directories.  Filling in those blanks made it start working.
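Keith's fix generalizes: every directory the daemons write to (the install tree, the name/data directories, hadoop.tmp.dir) must be owned by the account that starts them. A quick audit, with illustrative default paths:

```shell
# List anything under the usual Hadoop directories that the current user
# does not own; every hit is a candidate for chown -R.
USER=${USER:-$(id -un)}
for d in "${HADOOP_HOME:-/usr/local/hadoop}" "/tmp/hadoop-$USER"; do
  [ -d "$d" ] && find "$d" ! -user "$USER" -print
done
# Fix-up (as root), e.g.:  chown -R "$USER" /usr/local/hadoop "/tmp/hadoop-$USER"
```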

On Jul 27, 2012, at 11:30 , anil gupta wrote:

 Hi Keith,
 
 Does ping to localhost returns a reply? Try telneting to localhost 9000.
 [...]



Keith Wiley kwi...@keithwiley.com keithwiley.com music.keithwiley.com

What I primarily learned in grad school is how much I *don't* know.
Consequently, I left grad school with a higher ignorance to knowledge ratio than
when I entered.
   --  Keith Wiley




Re: Retrying connect to server error while configuring hadoop

2011-04-12 Thread praveen.peddi
 2011-04-08 15:47:47,839 INFO org.apache.hadoop.ipc.Client: Retrying connect
 to server: hadoop1/192.168.161.198:8020. Already tried 0 time(s).
 2011-04-08 15:47:48,849 INFO org.apache.hadoop.ipc.Client: Retrying connect
 to server: hadoop1/192.168.161.198:8020. Already tried 1 time(s).
 2011-04-08 15:47:49,859 INFO org.apache.hadoop.ipc.Client: Retrying connect
 to server: hadoop1/192.168.161.198:8020. Already tried 2 time(s).
 2011-04-08 15:47:50,869 INFO org.apache.hadoop.ipc.Client: Retrying connect
 to server: hadoop1/192.168.161.198:8020. Already tried 3 time(s).
 2011-04-08 15:47:51,878 INFO org.apache.hadoop.ipc.Client: Retrying connect
 to server: hadoop1/192.168.161.198:8020. Already tried 4 time(s).
 2011-04-08 15:47:52,889 INFO org.apache.hadoop.ipc.Client: Retrying connect
 to server: hadoop1/192.168.161.198:8020. Already tried 5 time(s).
 2011-04-08 15:47:53,900 INFO org.apache.hadoop.ipc.Client: Retrying connect
 to server: hadoop1/192.168.161.198:8020. Already tried 6 time(s).
 2011-04-08 15:47:54,908 INFO org.apache.hadoop.ipc.Client: Retrying connect
 to server: hadoop1/192.168.161.198:8020. Already tried 7 time(s).
 2011-04-08 15:47:55,917 INFO org.apache.hadoop.ipc.Client: Retrying connect
 to server: hadoop1/192.168.161.198:8020. Already tried 8 time(s).
 2011-04-08 15:47:56,926 INFO org.apache.hadoop.ipc.Client: Retrying connect
 to server: hadoop1/192.168.161.198:8020. Already tried 9 time(s).
 2011-04-08 15:47:56,928 INFO org.apache.hadoop.ipc.RPC: Server at
 hadoop1/192.168.161.198:8020 not available yet, Z...
 2011-04-08 15:47:58,944 INFO org.apache.hadoop.ipc.Client: Retrying connect
 to server: hadoop1/192.168.161.198:8020. Already tried 0 time(s).
 2011-04-08 15:47:59,953 INFO org.apache.hadoop.ipc.Client: Retrying connect
 to server: hadoop1/192.168.161.198:8020. Already tried 1 time(s).
 2011-04-08 15:48:00,961 INFO org.apache.hadoop.ipc.Client: Retrying connect
 to server: hadoop1/192.168.161.198:8020. Already tried 2 time(s).
 
 =
 
 Can anyone please help me to understand the problem. 
 
 Thanks in advance.
 
 -- 
 View this message in context: 
 http://old.nabble.com/%22Retrying-connect-to-server%22-error-while-configuring-hadoop-tp31376269p31376269.html
 Sent from the Hadoop core-user mailing list archive at Nabble.com.
 


Retrying connect to server error while configuring hadoop

2011-04-11 Thread prasunb

Hello, 

I am trying to configure Hadoop in fully distributed mode on three virtual
Fedora machines. During configuration I am not getting any errors. Even when I
execute the script start-dfs.sh, there aren't any errors. 

But in practice the namenode isn't able to connect to the datanodes. These are
the error snippets from the hadoop-root-datanode-hadoop2.log files of
both datanodes 

== 

STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = hadoop2/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2-CDH3B4
STARTUP_MSG:   build =  -r 3aa7c91592ea1c53f3a913a581dbfcdfebe98bfe;
compiled by 'root' on Mon Feb 21 17:31:12 EST 2011
/
2011-04-08 15:33:03,537 WARN org.apache.hadoop.util.NativeCodeLoader: Unable
to load native-hadoop library for your platform... using builtin-java
classes where applicable
2011-04-08 15:33:03,549 INFO
org.apache.hadoop.security.UserGroupInformation: JAAS Configuration already
set up for Hadoop, not re-installing.
2011-04-08 15:33:03,691 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call
to hadoop1/192.168.161.198:8020 failed on local exception:
java.io.IOException: Connection reset by peer
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1139)
at org.apache.hadoop.ipc.Client.call(Client.java:1107)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
at $Proxy4.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:342)
at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:317)
at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:297)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:338)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:280)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1527)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1467)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1485)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1610)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1620)
Caused by: java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:202)
at sun.nio.ch.IOUtil.read(IOUtil.java:175)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:243)
at
org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
at
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
at
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
at java.io.FilterInputStream.read(FilterInputStream.java:116)
at
org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:375)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
at java.io.DataInputStream.readInt(DataInputStream.java:370)
at
org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:812)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:720)

2011-04-08 15:33:03,692 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/
SHUTDOWN_MSG: Shutting down DataNode at hadoop2/127.0.0.1
/
2011-04-08 15:47:46,416 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = hadoop2/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2-CDH3B4
STARTUP_MSG:   build =  -r 3aa7c91592ea1c53f3a913a581dbfcdfebe98bfe;
compiled by 'root' on Mon Feb 21 17:31:12 EST 2011
/
2011-04-08 15:47:46,738 INFO
org.apache.hadoop.security.UserGroupInformation: JAAS Configuration already
set up for Hadoop, not re-installing.
2011-04-08 15:47:47,839 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: hadoop1/192.168.161.198:8020. Already tried 0 time(s).
2011-04-08 15:47:48,849 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: hadoop1/192.168.161.198:8020. Already tried 1 time(s).
2011-04-08 15:47:49,859 INFO org.apache.hadoop.ipc.Client

RE: Retrying connect to server

2010-12-31 Thread Cavus,M.,Fa. Post Direkt
Hi,
I do get this:
$ jps
6017 DataNode
5805 NameNode
6234 SecondaryNameNode
6354 Jps

What can I do to start JobTracker?

Here my config Files:
$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
    <description>The host and port that the MapReduce job tracker runs
    at.</description>
  </property>
</configuration>


$ cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>The actual number of replications can be specified when the
    file is created.</description>
  </property>
</configuration>

$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
    <description>The name of the default file system. A URI whose
    scheme and authority determine the FileSystem implementation.
    </description>
  </property>
</configuration>

-Original Message-
From: James Seigel [mailto:ja...@tynt.com] 
Sent: Friday, December 31, 2010 4:56 AM
To: common-user@hadoop.apache.org
Subject: Re: Retrying connect to server

Or
3) The configuration (or lack thereof) on the machine you are trying to 
run this on has no idea where your DFS or JobTracker is :)

Cheers
James.

On 2010-12-30, at 8:53 PM, Adarsh Sharma wrote:

 Cavus,M.,Fa. Post Direkt wrote:
 I process this
 
 ./hadoop jar ../../hadoopjar/hd.jar org.postdirekt.hadoop.WordCount 
 gutenberg gutenberg-output
 
 I get this
 Did anyone know why I get this Error?
 
 10/12/30 16:49:01 INFO ipc.Client: Retrying connect to server: 
 localhost/127.0.0.1:9001. Already tried 0 time(s).
 [...]
 Exception in thread "main" java.net.ConnectException: Call to 
 localhost/127.0.0.1:9001 failed on connection exception: 
 java.net.ConnectException: Connection refused
 [...]

RE: Retrying connect to server

2010-12-31 Thread Cavus,M.,Fa. Post Direkt
Hi,
I've forgotten to start start-mapred.sh

Thanks All
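For anyone hitting the same symptom: in Hadoop 0.20.x, start-dfs.sh only brings up the HDFS daemons; the JobTracker and TaskTrackers come from the separate MapReduce script (paths below assume a standard install layout):

```shell
# HDFS side (NameNode, DataNode, SecondaryNameNode):
#   $HADOOP_HOME/bin/start-dfs.sh
# MapReduce side (JobTracker, TaskTracker) -- the step that was missing here:
#   $HADOOP_HOME/bin/start-mapred.sh
# Then verify that all five daemons are up:
if command -v jps >/dev/null 2>&1; then
  jps
else
  echo "jps not found (needs a JDK on PATH)"
fi
```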


Retrying connect to server

2010-12-30 Thread Cavus,M.,Fa. Post Direkt
I process this

./hadoop jar ../../hadoopjar/hd.jar org.postdirekt.hadoop.WordCount gutenberg 
gutenberg-output

I get this
Did anyone know why I get this error?

10/12/30 16:48:59 INFO security.Groups: Group mapping 
impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=30
10/12/30 16:49:01 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9001. Already tried 0 time(s).
10/12/30 16:49:02 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9001. Already tried 1 time(s).
10/12/30 16:49:03 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9001. Already tried 2 time(s).
10/12/30 16:49:04 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9001. Already tried 3 time(s).
10/12/30 16:49:05 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9001. Already tried 4 time(s).
10/12/30 16:49:06 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9001. Already tried 5 time(s).
10/12/30 16:49:07 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9001. Already tried 6 time(s).
10/12/30 16:49:08 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9001. Already tried 7 time(s).
10/12/30 16:49:09 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9001. Already tried 8 time(s).
10/12/30 16:49:10 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9001. Already tried 9 time(s).
Exception in thread "main" java.net.ConnectException: Call to 
localhost/127.0.0.1:9001 failed on connection exception: 
java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:932)
at org.apache.hadoop.ipc.Client.call(Client.java:908)
at 
org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
at $Proxy0.getProtocolVersion(Unknown Source)
at 
org.apache.hadoop.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:228)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:224)
at org.apache.hadoop.mapreduce.Cluster.createRPCProxy(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.createClient(Cluster.java:94)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:70)
at org.apache.hadoop.mapreduce.Job.<init>(Job.java:129)
at org.apache.hadoop.mapreduce.Job.<init>(Job.java:134)
at org.postdirekt.hadoop.WordCount.main(WordCount.java:19)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:192)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:417)
at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:207)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1025)
at org.apache.hadoop.ipc.Client.call(Client.java:885)
... 15 more


Re: Retrying connect to server

2010-12-30 Thread Esteban Gutierrez Moguel
Hello Cavus,

Is your JobTracker running on localhost? It would be great if you could
provide more information about your current Hadoop setup.

cheers,
esteban.


estebangutierrez.com — twitter.com/esteban


2010/12/30 Cavus,M.,Fa. Post Direkt m.ca...@postdirekt.de

 I process this

 ./hadoop jar ../../hadoopjar/hd.jar org.postdirekt.hadoop.WordCount
 gutenberg gutenberg-output

 I get this
 Did anyone know why I get this error?
 [...]



Re: Retrying connect to server

2010-12-30 Thread maha
Hi Cavus,

   Please check that the Hadoop JobTracker and the other daemons are running by 
typing jps. If one of (JobTracker, TaskTracker, NameNode, DataNode) is missing, 
then you need to run stop-all.sh, format the namenode, and run start-all.sh again. 

 Maha

On Dec 30, 2010, at 7:52 AM, Cavus,M.,Fa. Post Direkt wrote:

 I process this
 
 ./hadoop jar ../../hadoopjar/hd.jar org.postdirekt.hadoop.WordCount gutenberg 
 gutenberg-output
 
 I get this
 Did anyone know why I get this error?
 [...]



Re: Retrying connect to server

2010-12-30 Thread li ping
Make sure your /etc/hosts file contains the correct IP/hostname pairs. This
is very important.
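For example, a minimal /etc/hosts for a single-node setup might look like this (the 192.168.100.11 address and the hostname hadoop-master below are placeholders; substitute your machine's own values):

```
127.0.0.1       localhost
192.168.100.11  hadoop-master
```

The important part is that the hostname the daemons bind to resolves to the address clients actually connect to; a line mapping the machine's hostname to 127.0.1.1, as some distributions add, can leave a daemon listening only on loopback.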

2010/12/30 Cavus,M.,Fa. Post Direkt m.ca...@postdirekt.de

 [quoted original message and stack trace trimmed; identical to the message quoted in full above]




-- 
-李平


Re: Retrying connect to server

2010-12-30 Thread Adarsh Sharma

Cavus,M.,Fa. Post Direkt wrote:

[quoted original message and stack trace trimmed; identical to the message quoted in full above]

This is the most common issue encountered after configuring a Hadoop cluster.

Reasons:

1. Your NameNode or JobTracker is not running. Verify through the web UI and 
the jps command.
2. DNS resolution. You must have IP/hostname entries for all nodes in the 
/etc/hosts file.
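Reason 2 can be checked quickly from the shell. The sketch below assumes a Linux box with `getent` (which follows the same /etc/hosts-then-DNS lookup order the daemons use):

```shell
# Print the address a hostname resolves to, or UNRESOLVED if lookup fails.
resolve() {
  getent hosts "$1" | awk '{print $1; exit}'
}

# Check loopback plus this machine's own hostname; every node named in the
# Hadoop config should resolve to a real, reachable address.
for h in localhost "$(hostname)"; do
  ip="$(resolve "$h")"
  echo "$h -> ${ip:-UNRESOLVED}"
done
```

Any node printing UNRESOLVED (or resolving to the wrong address) needs an entry in /etc/hosts.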




Best Regards

Adarsh Sharma


Re: Retrying connect to server

2010-12-30 Thread James Seigel
Or
3) The configuration (or lack thereof) on the machine you are trying to
run this on has no idea where your DFS or JobTracker is :)

Cheers
James.
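In the Hadoop 0.20/1.x configuration this thread is using, the client finds HDFS and the JobTracker through these two properties; the host/port values below are just the conventional pseudo-distributed defaults, not necessarily what your cluster uses:

```xml
<!-- core-site.xml: where clients look for HDFS -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

<!-- mapred-site.xml: where clients look for the JobTracker -->
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>
```

If these are unset or point at the wrong host, the client retries against the default address and eventually fails with exactly the ConnectException shown above.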

On 2010-12-30, at 8:53 PM, Adarsh Sharma wrote:

 Cavus,M.,Fa. Post Direkt wrote:
 [quoted error log and stack trace trimmed; identical to the message quoted in full above]
 This is the most common issue encountered after configuring a Hadoop cluster.
 
 Reasons:
 
 1. Your NameNode or JobTracker is not running. Verify through the web UI and the 
 jps command.
 2. DNS resolution. You must have IP/hostname entries for all nodes in the 
 /etc/hosts file.
 
 
 
 Best Regards
 
 Adarsh Sharma



Error: INFO ipc.Client: Retrying connect to server: /192.168.100.11:8020. Already tried 0 time(s).

2009-10-08 Thread santosh gandham
Hi,
  I am new to Hadoop. I just configured it based on the documentation. While
I was running the example program wordcount.java, I got errors. When I
gave the command
 $ bin/hadoop dfs -mkdir santhosh , I got this error:

 09/10/08 13:30:12 INFO ipc.Client: Retrying connect to server: /
192.168.100.11:8020. Already tried 0 time(s).
09/10/08 13:30:13 INFO ipc.Client: Retrying connect to server: /
192.168.100.11:8020. Already tried 1 time(s).
09/10/08 13:30:14 INFO ipc.Client: Retrying connect to server: /
192.168.100.11:8020. Already tried 2 time(s).
09/10/08 13:30:15 INFO ipc.Client: Retrying connect to server: /
192.168.100.11:8020. Already tried 3 time(s).
09/10/08 13:30:16 INFO ipc.Client: Retrying connect to server: /
192.168.100.11:8020. Already tried 4 time(s).
09/10/08 13:30:17 INFO ipc.Client: Retrying connect to server: /
192.168.100.11:8020. Already tried 5 time(s).
09/10/08 13:30:18 INFO ipc.Client: Retrying connect to server: /
192.168.100.11:8020. Already tried 6 time(s).
09/10/08 13:30:19 INFO ipc.Client: Retrying connect to server: /
192.168.100.11:8020. Already tried 7 time(s).
09/10/08 13:30:20 INFO ipc.Client: Retrying connect to server: /
192.168.100.11:8020. Already tried 8 time(s).
09/10/08 13:30:21 INFO ipc.Client: Retrying connect to server: /
192.168.100.11:8020. Already tried 9 time(s).
Bad connection to FS. command aborted.

I am able to ssh to the server without a password, and I didn't get any
errors while formatting HDFS with the command $ bin/hadoop namenode
-format .
Please help me; what should I do now? Thank you.



-- 
Gandham Santhosh


Re: Error: INFO ipc.Client: Retrying connect to server: /192.168.100.11:8020. Already tried 0 time(s).

2009-10-08 Thread .ke. sivakumar
Hi Santosh,

Check whether all the datanodes are up and running, using the command
'bin/hadoop dfsadmin -report'.
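If the datanodes look fine but the client still cannot reach port 8020, a scriptable stand-in for telnet can confirm whether anything is listening there. This is a sketch using bash's /dev/tcp pseudo-device; substitute your NameNode's host and port:

```shell
# Print "open" if a TCP connection to HOST:PORT succeeds, else "closed".
# Roughly equivalent to `telnet HOST PORT`, but usable in scripts.
port_open() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo open
  else
    echo closed
  fi
}

# e.g. port_open 192.168.100.11 8020
```

If it prints closed, the NameNode either is not running or is listening on a different address/port; check its log and the fs.default.name setting.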




On Thu, Oct 8, 2009 at 4:24 AM, santosh gandham santhosh...@gmail.com wrote:

 [quoted original message trimmed; see the full message above]