Your issue was probably that slave_hadoop and master_hadoop are not valid
host names (the underscore is not a legal hostname character):

RFCs <http://en.wikipedia.org/wiki/Request_for_Comments> mandate that a
hostname's labels may contain only the ASCII
<http://en.wikipedia.org/wiki/ASCII> letters 'a' through 'z'
(case-insensitive), the digits '0' through '9', and the hyphen. Hostname
labels cannot begin or end with a hyphen. No other symbols, punctuation
characters, or blank spaces are permitted.

from http://en.wikipedia.org/wiki/Hostname
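
A hedged sketch of the fix: if the machines were renamed to hyphenated names
(master-hadoop and slave-hadoop below are only placeholders), the
core-site.xml entry would become something like

<property>
<name>fs.default.name</name>
<value>hdfs://master-hadoop:54310</value>
</property>

with the same names used in conf/masters, conf/slaves and /etc/hosts (54310
is just an illustrative port). Plain IP addresses, as you ended up using,
avoid the naming rules entirely.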

-Todd

On Tue, Oct 13, 2009 at 10:01 AM, Tejas Lagvankar <t...@umbc.edu> wrote:

> Hey Kevin,
>
> You were right...
> I changed all my aliases to IP addresses. It worked!
>
> Thank you all again :)
>
> Regards,
> Tejas
>
>
> On Oct 13, 2009, at 12:41 PM, Tejas Lagvankar wrote:
>
>  By name resolution, I assume you mean the names listed in /etc/hosts.
>> Yes, in the logs, the IP address does appear at the beginning; correct me
>> if I'm wrong.
>> I will also try using just the IPs instead of the aliases.
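>> For reference, a minimal /etc/hosts sketch (the addresses and hyphenated
>> names below are purely hypothetical) would look something like:
>>
>> 192.168.1.10    master-hadoop
>> 192.168.1.11    slave-hadoop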
>>
>> On Oct 13, 2009, at 12:37 PM, Kevin Sweeney wrote:
>>
>>  Did you verify the name resolution?
>>>
>>> On Tue, Oct 13, 2009 at 4:34 PM, Tejas Lagvankar <t...@umbc.edu> wrote:
>>>
>>> I get the same error even if I specify the port number. I have tried
>>> ports 54310 as well as 9000.
>>>
>>>
>>> Regards,
>>> Tejas
>>>
>>>
>>> On Oct 13, 2009, at 12:12 PM, Chandan Tamrakar wrote:
>>>
>>> I think you need to specify the port as well in the following property:
>>>
>>> <property>
>>> <name>fs.default.name</name>
>>> <value>hdfs://master_hadoop</value>
>>> </property>
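>>> For example, with the port added it would read something like this
>>> (54310 below is only an illustrative port number):
>>>
>>> <property>
>>> <name>fs.default.name</name>
>>> <value>hdfs://master_hadoop:54310</value>
>>> </property>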
>>>
>>>
>>> On Tue, Oct 13, 2009 at 7:17 AM, Tejas Lagvankar <t...@umbc.edu> wrote:
>>>
>>> Hi,
>>>
>>>
>>> We are trying to set up a cluster (starting with 2 machines) using the
>>> new
>>> 0.20.1 version.
>>>
>>> On the master machine, just after the server starts, the name node dies
>>> with the following exception:
>>>
>>> 2009-10-13 01:22:24,740 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: Incomplete HDFS URI, no host: hdfs://master_hadoop
>>>    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:78)
>>>    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1373)
>>>    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>>    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1385)
>>>    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:191)
>>>    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>>>    at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
>>>    at org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:208)
>>>    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:204)
>>>    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>>>    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>>>    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>>>
>>> Can anyone help? Also, can anyone send across example configuration files
>>> for 0.20.1 if they are different from the ones we are using?
>>>
>>> The detailed log file is attached.
>>>
>>>
>>>
>>>
>>> The configuration files are as follows:
>>>
>>> MASTER CONFIG
>>> ------ conf/masters -------
>>> master_hadoop
>>>
>>> ------ conf/slaves -------
>>> master_hadoop
>>> slave_hadoop
>>>
>>> ------ core-site.xml -------
>>> <configuration>
>>>
>>> <property>
>>> <name>fs.default.name</name>
>>> <value>hdfs://master_hadoop</value>
>>> </property>
>>>
>>> <property>
>>> <name>hadoop.tmp.dir</name>
>>> <value>/opt/hadoop-0.20.1/tmp</value>
>>> </property>
>>>
>>> </configuration>
>>>
>>> ------ hdfs-site.xml -------
>>> <property>
>>> <name>dfs.replication</name>
>>> <value>2</value>
>>> </property>
>>>
>>>
>>> ------ mapred-site.xml -------
>>> <property>
>>> <name>mapred.job.tracker</name>
>>> <value>tejas_hadoop:9001</value>
>>> </property>
>>>
>>>
>>>
>>>
>>>
>>> SLAVE CONFIG
>>> ------ core-site.xml -------
>>> <property>
>>> <name>hadoop.tmp.dir</name>
>>> <value>/opt/hadoop-0.20.1/tmp/</value>
>>> </property>
>>>
>>>
>>> <property>
>>> <name>fs.default.name</name>
>>> <value>hdfs://master_hadoop</value>
>>> </property>
>>>
>>>
>>> ------ hdfs-site.xml -------
>>> <property>
>>> <name>dfs.replication</name>
>>> <value>2</value>
>>> </property>
>>>
>>> ------ mapred-site.xml -------
>>> <property>
>>> <name>mapred.job.tracker</name>
>>> <value>tejas_hadoop:9001</value>
>>> </property>
>>>
>>>
>>>
>>> Regards,
>>>
>>> Tejas Lagvankar
>>> meette...@umbc.edu
>>> www.umbc.edu/~tej2 <http://www.umbc.edu/%7Etej2>
>>>
>>> --
>>> Chandan Tamrakar
>>>
>>> Tejas Lagvankar
>>> meette...@umbc.edu
>>> www.umbc.edu/~tej2 <http://www.umbc.edu/%7Etej2>
>>>
>> Tejas Lagvankar
>> meette...@umbc.edu
>> www.umbc.edu/~tej2 <http://www.umbc.edu/%7Etej2>
>>
>>
>>
>>
> Tejas Lagvankar
> meette...@umbc.edu
> www.umbc.edu/~tej2 <http://www.umbc.edu/%7Etej2>
>
>
>
>
