Dear Robin,

Thanks for your valuable time and response. Please find attached the
namenode logs and configuration files.

I am using two Ubuntu boxes: one acts as both master and slave, and the other
as a slave only. The environment on both machines is set up as follows:

Hadoop: hadoop_0.20.2
Linux: Ubuntu Linux 10.10 (master) and Ubuntu Linux 11.04 (slave)
Java: java-7-oracle
JAVA_HOME and HADOOP_HOME are configured in the .bashrc file.
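
For reference, the relevant .bashrc lines look roughly like this (the Hadoop
path matches my install under /usr/local/hadoop; the exact path for
java-7-oracle is from memory and may differ on the boxes):

export HADOOP_HOME=/usr/local/hadoop
export JAVA_HOME=/usr/lib/jvm/java-7-oracle
export PATH=$PATH:$HADOOP_HOME/bin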

Both machines are on the same LAN and can ping each other. The IP addresses
of both machines are configured in /etc/hosts.
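
The /etc/hosts entries on both boxes look roughly like this (the master
address is the one that appears in the slave's log further down; the slave
address here is only a placeholder):

16.150.98.62    master
192.168.0.2     slave    # placeholder, not the actual slave address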

I have SSH access to both master and slave as well.
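
As a quick sketch, this is how it was set up and how I verify it from the
master (assuming the hduser account from the tutorial; the key was copied
once with ssh-copy-id):

ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@slave   # run once from the master
ssh hduser@master    # should log in without a password prompt
ssh hduser@slave     # should log in without a password prompt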

Please let me know if you need any other information.

Thanks in advance.

Regards,
Guruprasad






On Thu, Feb 9, 2012 at 1:06 AM, Robin Mueller-Bady <
robin.mueller-b...@oracle.com> wrote:

>  Dear Guruprasad,
>
> It would be very helpful if you could provide details from your
> configuration files as well as more details on your setup.
> It seems that the connection from slave to master cannot be
> established ("Connection reset by peer").
> Do you use a virtual environment, physical master/slaves, or all on one
> machine?
> Please also paste the output of the "kinigul2" namenode logs.
>
> Regards,
>
> Robin
>
>
> On 02/08/12 13:06, Guruprasad B wrote:
>
> Hi,
>
> I am Guruprasad from Bangalore (India). I need help setting up the Hadoop
> platform; I am very new to it.
>
> I followed the article below and was able to set up a single-node cluster:
> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#what-we-want-to-do
>
> Now I am trying to set up a multi-node cluster by following this article:
> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
>
> My setup is as follows:
> Hadoop : hadoop_0.20.2
> Linux: Ubuntu Linux 10.10
> Java: java-7-oracle
>
>
> I have successfully reached the section "Starting the multi-node
> cluster" in the above article.
> When I start the HDFS/MapReduce daemons, they come up and then go down
> immediately on both the master and the slave.
> Please have a look at the logs below:
>
> hduser@kinigul2:/usr/local/hadoop$ bin/start-dfs.sh
> starting namenode, logging to
> /usr/local/hadoop/bin/../logs/hadoop-hduser-namenode-kinigul2.out
> master: starting datanode, logging to
> /usr/local/hadoop/bin/../logs/hadoop-hduser-datanode-kinigul2.out
> slave: starting datanode, logging to
> /usr/local/hadoop/bin/../logs/hadoop-hduser-datanode-guruL.out
> master: starting secondarynamenode, logging to
> /usr/local/hadoop/bin/../logs/hadoop-hduser-secondarynamenode-kinigul2.out
>
> hduser@kinigul2:/usr/local/hadoop$ jps
> 6098 DataNode
> 6328 Jps
> 5914 NameNode
> 6276 SecondaryNameNode
>
> hduser@kinigul2:/usr/local/hadoop$ jps
> 6350 Jps
>
>
> I am getting the error below in the slave logs:
>
> 2012-02-08 21:04:01,641 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to master/16.150.98.62:54310 failed on local exception: java.io.IOException: Connection reset by peer
>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>     at org.apache.hadoop.ipc.Client.call(Client.java:743)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>     at $Proxy4.getProtocolVersion(Unknown Source)
>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:346)
>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:383)
>     at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:314)
>     at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:291)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:269)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
> Caused by: java.io.IOException: Connection reset by peer
>     at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>     at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>     at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
>     at sun.nio.ch.IOUtil.read(IOUtil.java:191)
>     at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
>     at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>     at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>     at java.io.FilterInputStream.read(FilterInputStream.java:133)
>     at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:276)
>     at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>     at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
>     at java.io.DataInputStream.readInt(DataInputStream.java:387)
>     at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
>     at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>
>
> Can you please tell me what could be the reason behind this, or point me to
> some resources?
>
> Regards,
> Guruprasad
>
>
>
> --
> Robin Müller-Bady | Sales Consultant
> Phone: +49 211 74839 701 | Mobile: +49 172 8438346
> Oracle STCC Fusion Middleware
>
>

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<!-- In: conf/core-site.xml -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

</configuration>

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<!-- In: conf/hdfs-site.xml -->
<property>
  <name>dfs.replication</name>
  <value>2</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>
</configuration>

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<!-- In: conf/mapred-site.xml -->
<property>
  <name>mapred.job.tracker</name>
  <value>master:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>
</configuration>
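
For completeness, the conf/masters and conf/slaves files on the master box
follow the tutorial; roughly (from memory, hostnames match the /etc/hosts
entries above):

conf/masters:
master

conf/slaves:
master
slave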

Attachment: hadoop-env.sh
Description: Bourne shell script
