Yes, all versions are the same. I installed Hadoop on the master and copied the
folder to the other machines, so the installs should be identical.

BTW, I resolved the issue below by making all the slaves visible to the client
machine; previously only the master was visible to the client.
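For concreteness, one way to make the slaves visible is host entries on the
client; a sketch (the hostnames match my config, but the IPs here are
placeholders, not my real addresses):

    # /etc/hosts on the client machine (placeholder IPs)
    10.0.1.10   hmaster
    10.0.1.11   hslave1
    10.0.1.12   hslave2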

However, I have another problem when running the job: for some reason the slave
host names resolve to different names than the ones I gave in the Hadoop
config.

Each VM has two NICs. I configured eth1 as hslave1, hslave2, etc.; eth0 is not
used by Hadoop at all. But when the job runs, the slave names are somehow
resolved to the machine names instead of "hslave1", and I think that is causing
the communication issues between the slaves. I even set the property
"mapred.tasktracker.dns.interface" to "eth1", but that didn't help.

I saw that one can also specify the slave.host.name property, but I am trying
to avoid that since it would make the Hadoop install different on each slave.
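In case it matters, here is the kind of check I can run on a slave to compare
what the OS reports with what I configured (a sketch; the IP is a placeholder
for the slave's eth1 address):

    hostname           # the name the slave reports for itself
    hostname -f        # the fully qualified name
    host 10.0.1.11     # reverse lookup of the eth1 address (placeholder IP)

If the reverse lookup of the eth1 address doesn't come back as "hslave1", that
would explain why Hadoop picks up the machine name instead.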

Any thoughts on how to resolve this would be appreciated...

Praveen
________________________________________
From: ext icebergs [hkm...@gmail.com]
Sent: Saturday, March 05, 2011 9:24 PM
To: common-user@hadoop.apache.org
Subject: Re: Unable to use hadoop cluster on the cloud

Are you sure that all the versions of Hadoop are the same?

2011/3/4 <praveen.pe...@nokia.com>

> Thanks Adarsh for the reply.
>
> Just to clarify the issue a bit: I am able to do all operations
> (-copyFromLocal, -get, -rmr, etc.) from the master node, so I am confident
> that the communication between all the Hadoop machines is fine. But when I
> run the same operations from another machine that also has the same Hadoop
> config, I get the errors below. However, I can run -lsr and it lists the
> files correctly.
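> (A quick way to isolate this from the client is to test whether the datanode
> port from the log below is reachable at all; a sketch, using the masked
> address exactly as it appears in the log:
>
>     nc -zv xx.xx.16.12 50010   # reachable from master but not from client?
>
> nc is plain netcat; telnet to the same host and port would show the same
> thing.)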
>
> Praveen
>
> -----Original Message-----
> From: ext Adarsh Sharma [mailto:adarsh.sha...@orkash.com]
> Sent: Friday, March 04, 2011 12:12 AM
> To: common-user@hadoop.apache.org
> Subject: Re: Unable to use hadoop cluster on the cloud
>
> Hi Praveen, check via ssh and ping whether your datanodes can communicate
> with each other.
>
> Cheers, Adarsh
> praveen.pe...@nokia.com wrote:
> > Hello all,
> > I installed Hadoop 0.20.2 on physical machines and everything works like a
> charm. Now I have installed Hadoop on the cloud from the same hadoop-install
> gz file. The installation seems fine; I can even copy files to HDFS from the
> master machine. But when I try to do it from another "non-Hadoop" machine, I
> get the following error. I googled it, and a lot of people have hit this
> error, but I could not find a solution.
> >
> > Also, I didn't see any exceptions in the Hadoop logs.
> >
> > Any thoughts?
> >
> > $ /usr/local/hadoop-0.20.2/bin/hadoop fs -copyFromLocal
> > Merchandising-ear.tar.gz /tmp/hadoop-test/Merchandising-ear.tar.gz
> > 11/03/03 21:58:50 INFO hdfs.DFSClient: Exception in
> > createBlockOutputStream java.net.ConnectException: Connection timed
> > out
> > 11/03/03 21:58:50 INFO hdfs.DFSClient: Abandoning block
> > blk_-8243207628973732008_1005
> > 11/03/03 21:58:50 INFO hdfs.DFSClient: Waiting to find target node:
> > xx.xx.12:50010
> > 11/03/03 21:59:17 INFO hdfs.DFSClient: Exception in
> > createBlockOutputStream java.net.ConnectException: Connection timed
> > out
> > 11/03/03 21:59:17 INFO hdfs.DFSClient: Abandoning block
> > blk_2852127666568026830_1005
> > 11/03/03 21:59:17 INFO hdfs.DFSClient: Waiting to find target node:
> > xx.xx.16.12:50010
> > 11/03/03 21:59:44 INFO hdfs.DFSClient: Exception in
> > createBlockOutputStream java.net.ConnectException: Connection timed
> > out
> > 11/03/03 21:59:44 INFO hdfs.DFSClient: Abandoning block
> > blk_2284836193463265901_1005
> > 11/03/03 21:59:44 INFO hdfs.DFSClient: Waiting to find target node:
> > xx.xx.16.12:50010
> > 11/03/03 22:00:11 INFO hdfs.DFSClient: Exception in
> > createBlockOutputStream java.net.ConnectException: Connection timed
> > out
> > 11/03/03 22:00:11 INFO hdfs.DFSClient: Abandoning block
> > blk_-5600915414055250488_1005
> > 11/03/03 22:00:11 INFO hdfs.DFSClient: Waiting to find target node:
> > xx.xx.16.11:50010
> > 11/03/03 22:00:17 WARN hdfs.DFSClient: DataStreamer Exception:
> java.io.IOException: Unable to create new block.
> >         at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845)
> >         at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
> >         at
> > org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
> >
> > 11/03/03 22:00:17 WARN hdfs.DFSClient: Error Recovery for block
> > blk_-5600915414055250488_1005 bad datanode[0] nodes == null
> > 11/03/03 22:00:17 WARN hdfs.DFSClient: Could not get block locations.
> Source file "/tmp/hadoop-test/Merchandising-ear.tar.gz" - Aborting...
> > copyFromLocal: Connection timed out
> > 11/03/03 22:00:17 ERROR hdfs.DFSClient: Exception closing file
> > /tmp/hadoop-test/Merchandising-ear.tar.gz : java.net.ConnectException:
> > Connection timed out
> > java.net.ConnectException: Connection timed out
> >         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> >         at
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
> >         at
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> >         at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
> >         at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2870)
> >         at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2826)
> >         at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
> >         at
> > org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
> > [C4554954_admin@c4554954vl03 relevancy]$
> >
> >
> >
>
>
