Re: Datanode Setup

2009-04-27 Thread jpe30

bump*

Any suggestions?




Re: Datanode Setup

2009-04-23 Thread jpe30

Right now I'm just trying to get one node running.  Once it's running, I'll
copy it over.



jason hadoop wrote:
> 
> Have you copied the updated hadoop-site.xml file to the conf directory on
> all of your slave nodes?
> 
> 
> On Thu, Apr 23, 2009 at 2:10 PM, jpe30  wrote:
> 
>>
>> Ok, I've done all of this.  Set up my hosts file in Linux, setup my
>> master
>> and slaves file in Hadoop and setup my hadoop-site.xml.  It still does
>> not
>> work.  The datanode still gives me this error...
>>
>> STARTUP_MSG:   host = java.net.UnknownHostException: myhost: myhost
>>
>> ...which makes me think its not reading the hadoop-site.xml file at all.
>> I've checked the permissions and the user has full permissions to all
>> files
>> within the Hadoop directory.  Any suggestions?
>>
>>
>>
>> Mithila Nagendra wrote:
>> >
>> > You should have conf/slaves file on the master node set to master,
>> node01,
>> > node02. so on and the masters file on master set to master. Also in
>> > the
>> > /etc/hosts file get rid of 'node6' in the line 127.0.0.1
>> > localhost.localdomain   localhost node6 on all your nodes. Ensure that
>> the
>> > /etc/hosts file contain the same information on all nodes. Also
>> > hadoop-site.xml files on all nodes should have master:portno for hdfs
>> and
>> > tasktracker.
>> > Once you do this restart hadoop.
>> >
>> > On Fri, Apr 17, 2009 at 10:04 AM, jpe30  wrote:
>> >
>> >>
>> >>
>> >>
>> >> Mithila Nagendra wrote:
>> >> >
>> >> > You have to make sure that you can ssh between the nodes. Also check
>> >> the
>> >> > file hosts in /etc folder. Both the master and the slave much have
>> each
>> >> > others machines defined in it. Refer to my previous mail
>> >> > Mithila
>> >> >
>> >> >
>> >>
>> >>
>> >> I have SSH setup correctly and here is the /etc/hosts file on node6 of
>> >> the
>> >> datanodes.
>> >>
>> >> #  
>> >> 127.0.0.1   localhost.localdomain   localhost node6
>> >> 192.168.1.10master
>> >> 192.168.1.1 node1
>> >> 192.168.1.2 node2
>> >> 192.168.1.3 node3
>> >> 192.168.1.4 node4
>> >> 192.168.1.5 node5
>> >> 192.168.1.6 node6
>> >>
>> >> I have the slaves file on each machine set as node1 to node6, and each
>> >> masters file set to master except for the master itself.  Still, I
>> keep
>> >> getting that same error in the datanodes...
>> >> --
>> >> View this message in context:
>> >> http://www.nabble.com/Datanode-Setup-tp23064660p23101738.html
>> >> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>> >>
>> >>
>> >
>> >
>>
>> --
>> View this message in context:
>> http://www.nabble.com/Datanode-Setup-tp23064660p23203293.html
>> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>>
>>
> 
> 
> -- 
> Alpha Chapters of my book on Hadoop are available
> http://www.apress.com/book/view/9781430219422
> 
> 




Re: Datanode Setup

2009-04-23 Thread jason hadoop
Have you copied the updated hadoop-site.xml file to the conf directory on
all of your slave nodes?
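
One way to check what a daemon will actually see is to load the configuration
the same way the daemons do.  The little program below is only a sketch (the
class name is mine, not something from Hadoop), and it assumes the conf
directory holding hadoop-site.xml is on the classpath, e.g. when it is run
through bin/hadoop.  new Configuration() reads hadoop-default.xml and then
hadoop-site.xml, so the printed values are what the datanode/tasktracker would
actually use:

import org.apache.hadoop.conf.Configuration;

public class ConfCheck {
    public static void main(String[] args) {
        // Loads hadoop-default.xml and hadoop-site.xml from the classpath,
        // the same way the Hadoop daemons do at startup.
        Configuration conf = new Configuration();
        System.out.println("fs.default.name    = " + conf.get("fs.default.name"));
        System.out.println("mapred.job.tracker = " + conf.get("mapred.job.tracker"));
        System.out.println("dfs.replication    = " + conf.get("dfs.replication"));
    }
}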


On Thu, Apr 23, 2009 at 2:10 PM, jpe30  wrote:

>
> Ok, I've done all of this.  Set up my hosts file in Linux, setup my master
> and slaves file in Hadoop and setup my hadoop-site.xml.  It still does not
> work.  The datanode still gives me this error...
>
> STARTUP_MSG:   host = java.net.UnknownHostException: myhost: myhost
>
> ...which makes me think its not reading the hadoop-site.xml file at all.
> I've checked the permissions and the user has full permissions to all files
> within the Hadoop directory.  Any suggestions?
>
>
>
> Mithila Nagendra wrote:
> >
> > You should have conf/slaves file on the master node set to master,
> node01,
> > node02. so on and the masters file on master set to master. Also in
> > the
> > /etc/hosts file get rid of 'node6' in the line 127.0.0.1
> > localhost.localdomain   localhost node6 on all your nodes. Ensure that
> the
> > /etc/hosts file contain the same information on all nodes. Also
> > hadoop-site.xml files on all nodes should have master:portno for hdfs and
> > tasktracker.
> > Once you do this restart hadoop.
> >
> > On Fri, Apr 17, 2009 at 10:04 AM, jpe30  wrote:
> >
> >>
> >>
> >>
> >> Mithila Nagendra wrote:
> >> >
> >> > You have to make sure that you can ssh between the nodes. Also check
> >> the
> >> > file hosts in /etc folder. Both the master and the slave much have
> each
> >> > others machines defined in it. Refer to my previous mail
> >> > Mithila
> >> >
> >> >
> >>
> >>
> >> I have SSH setup correctly and here is the /etc/hosts file on node6 of
> >> the
> >> datanodes.
> >>
> >> #  
> >> 127.0.0.1   localhost.localdomain   localhost node6
> >> 192.168.1.10master
> >> 192.168.1.1 node1
> >> 192.168.1.2 node2
> >> 192.168.1.3 node3
> >> 192.168.1.4 node4
> >> 192.168.1.5 node5
> >> 192.168.1.6 node6
> >>
> >> I have the slaves file on each machine set as node1 to node6, and each
> >> masters file set to master except for the master itself.  Still, I keep
> >> getting that same error in the datanodes...
> >> --
> >> View this message in context:
> >> http://www.nabble.com/Datanode-Setup-tp23064660p23101738.html
> >> Sent from the Hadoop core-user mailing list archive at Nabble.com.
> >>
> >>
> >
> >
>
> --
> View this message in context:
> http://www.nabble.com/Datanode-Setup-tp23064660p23203293.html
> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>
>


-- 
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422


Re: Datanode Setup

2009-04-23 Thread jpe30

Ok, I've done all of this.  I set up my hosts file in Linux, set up my masters
and slaves files in Hadoop, and set up my hadoop-site.xml.  It still does not
work.  The datanode still gives me this error...

STARTUP_MSG:   host = java.net.UnknownHostException: myhost: myhost

...which makes me think it's not reading the hadoop-site.xml file at all.
I've checked the permissions and the user has full permissions to all files
within the Hadoop directory.  Any suggestions?



Mithila Nagendra wrote:
> 
> You should have conf/slaves file on the master node set to master, node01,
> node02. so on and the masters file on master set to master. Also in
> the
> /etc/hosts file get rid of 'node6' in the line 127.0.0.1
> localhost.localdomain   localhost node6 on all your nodes. Ensure that the
> /etc/hosts file contain the same information on all nodes. Also
> hadoop-site.xml files on all nodes should have master:portno for hdfs and
> tasktracker.
> Once you do this restart hadoop.
> 
> On Fri, Apr 17, 2009 at 10:04 AM, jpe30  wrote:
> 
>>
>>
>>
>> Mithila Nagendra wrote:
>> >
>> > You have to make sure that you can ssh between the nodes. Also check
>> the
>> > file hosts in /etc folder. Both the master and the slave much have each
>> > others machines defined in it. Refer to my previous mail
>> > Mithila
>> >
>> >
>>
>>
>> I have SSH setup correctly and here is the /etc/hosts file on node6 of
>> the
>> datanodes.
>>
>> #  
>> 127.0.0.1   localhost.localdomain   localhost node6
>> 192.168.1.10master
>> 192.168.1.1 node1
>> 192.168.1.2 node2
>> 192.168.1.3 node3
>> 192.168.1.4 node4
>> 192.168.1.5 node5
>> 192.168.1.6 node6
>>
>> I have the slaves file on each machine set as node1 to node6, and each
>> masters file set to master except for the master itself.  Still, I keep
>> getting that same error in the datanodes...
>> --
>> View this message in context:
>> http://www.nabble.com/Datanode-Setup-tp23064660p23101738.html
>> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>>
>>
> 
> 




Re: Datanode Setup

2009-04-17 Thread Mithila Nagendra
You should have the conf/slaves file on the master node set to master, node01,
node02, and so on, and the masters file on the master set to master.  Also, in
the /etc/hosts file, get rid of 'node6' in the line '127.0.0.1
localhost.localdomain   localhost node6' on all your nodes.  Ensure that the
/etc/hosts file contains the same information on all nodes.  Also, the
hadoop-site.xml files on all nodes should have master:portno for HDFS and the
job tracker.
Once you do this, restart Hadoop.
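
With the listing quoted below, node6's first /etc/hosts line would then read
(just a sketch of that one change; the per-node entries, including 192.168.1.6
node6, stay exactly as posted):

127.0.0.1   localhost.localdomain   localhost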

On Fri, Apr 17, 2009 at 10:04 AM, jpe30  wrote:

>
>
>
> Mithila Nagendra wrote:
> >
> > You have to make sure that you can ssh between the nodes. Also check the
> > file hosts in /etc folder. Both the master and the slave much have each
> > others machines defined in it. Refer to my previous mail
> > Mithila
> >
> >
>
>
> I have SSH setup correctly and here is the /etc/hosts file on node6 of the
> datanodes.
>
> #  
> 127.0.0.1   localhost.localdomain   localhost node6
> 192.168.1.10master
> 192.168.1.1 node1
> 192.168.1.2 node2
> 192.168.1.3 node3
> 192.168.1.4 node4
> 192.168.1.5 node5
> 192.168.1.6 node6
>
> I have the slaves file on each machine set as node1 to node6, and each
> masters file set to master except for the master itself.  Still, I keep
> getting that same error in the datanodes...
> --
> View this message in context:
> http://www.nabble.com/Datanode-Setup-tp23064660p23101738.html
> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>
>


Re: Datanode Setup

2009-04-17 Thread jpe30



Mithila Nagendra wrote:
> 
> You have to make sure that you can ssh between the nodes. Also check the
> file hosts in /etc folder. Both the master and the slave much have each
> others machines defined in it. Refer to my previous mail
> Mithila
> 
> 


I have SSH set up correctly, and here is the /etc/hosts file on node6 of the
datanodes.

#  
127.0.0.1   localhost.localdomain   localhost node6
192.168.1.10    master
192.168.1.1 node1
192.168.1.2 node2
192.168.1.3 node3
192.168.1.4 node4
192.168.1.5 node5
192.168.1.6 node6

I have the slaves file on each machine set as node1 to node6, and each
masters file set to master except for the master itself.  Still, I keep
getting that same error in the datanodes...



Re: Datanode Setup

2009-04-17 Thread Mithila Nagendra
You have to make sure that you can ssh between the nodes.  Also check the
hosts file in the /etc folder.  Both the master and the slaves must have each
other's machines defined in it (a quick standalone check of that lookup is
sketched below).  Refer to my previous mail.
Mithila
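
The failing call in the datanode log is the JVM resolving the machine's own
hostname, and it happens before any address from hadoop-site.xml is used, so
the same error can be reproduced outside Hadoop.  This is only an illustrative
sketch (the class name is made up): if it throws UnknownHostException for
'myhost' on a node, that node's hostname simply has no entry in /etc/hosts or
DNS.

import java.net.InetAddress;

public class HostnameCheck {
    public static void main(String[] args) throws Exception {
        // Same lookup the DataNode performs at startup: resolve the local
        // machine's hostname.  If the hostname (e.g. "myhost") has no entry
        // in /etc/hosts or DNS, this throws java.net.UnknownHostException,
        // exactly like the line in the datanode log.
        InetAddress local = InetAddress.getLocalHost();
        System.out.println(local.getHostName() + " -> " + local.getHostAddress());
    }
}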

On Fri, Apr 17, 2009 at 7:18 PM, jpe30  wrote:

>
> ok, I have my hosts file setup the way you told me, I changed my
> replication
> factor to 1.  The thing that I don't get is this line from the datanodes...
>
> STARTUP_MSG:   host = java.net.UnknownHostException: myhost: myhost
>
> If I have my hadoop-site.xml setup correctly, with the correct address it
> should work right?  It seems like the datanodes aren't getting an IP
> address
> to use, and I'm not sure why.
>
>
> jpe30 wrote:
> >
> > That helps a lot actually.  I will try setting up my hosts file tomorrow
> > and make the other changes you suggested.
> >
> > Thanks!
> >
> >
> >
> > Mithila Nagendra wrote:
> >>
> >> Hi,
> >> The replication factor has to be set to 1. Also for you dfs and job
> >> tracker
> >> configuration you should insert the name of the node rather than the i.p
> >> address.
> >>
> >> For instance:
> >>  192.168.1.10:54310
> >>
> >> can be:
> >>
> >>  master:54310
> >>
> >> The nodes can be renamed by renaming them in the hosts files in /etc
> >> folder.
> >> It should look like the following:
> >>
> >> # Do not remove the following line, or various programs
> >> # that require network functionality will fail.
> >> 127.0.0.1   localhost.localdomain   localhost   node01
> >> 192.168.0.1 node01
> >> 192.168.0.2 node02
> >> 192.168.0.3 node03
> >>
> >> Hope this helps
> >> Mithila
> >>
> >> On Wed, Apr 15, 2009 at 9:40 PM, jpe30  wrote:
> >>
> >>>
> >>> I'm setting up a Hadoop cluster and I have the name node and job
> tracker
> >>> up
> >>> and running.  However, I cannot get any of my datanodes or tasktrackers
> >>> to
> >>> start.  Here is my hadoop-site.xml file...
> >>>
> >>>
> >>>
> >>> 
> >>> 
> >>>
> >>> 
> >>>
> >>> 
> >>>
> >>> 
> >>>  hadoop.tmp.dir
> >>>  /home/hadoop/h_temp
> >>>  A base for other temporary directories.
> >>> 
> >>>
> >>> 
> >>>  dfs.data.dir
> >>>  /home/hadoop/data
> >>> 
> >>>
> >>> 
> >>>  fs.default.name
> >>>   192.168.1.10:54310
> >>>  The name of the default file system.  A URI whose
> >>>   scheme and authority determine the FileSystem implementation.  The
> >>>  uri's scheme determines the config property (fs.SCHEME.impl) naming
> >>>  the FileSystem implementation class.  The uri's authority is used to
> >>>   determine the host, port, etc. for a filesystem.
> >>>  true
> >>> 
> >>>
> >>> 
> >>>  mapred.job.tracker
> >>>   192.168.1.10:54311
> >>>  The host and port that the MapReduce job tracker runs
> >>>   at.  If "local", then jobs are run in-process as a single map
> >>>  and reduce task.
> >>>   
> >>> 
> >>>
> >>> 
> >>>  dfs.replication
> >>>  0
> >>>   Default block replication.
> >>>   The actual number of replications can be specified when the file is
> >>> created.
> >>>  The default is used if replication is not specified in create time.
> >>>   
> >>> 
> >>>
> >>> 
> >>>
> >>>
> >>> and here is the error I'm getting...
> >>>
> >>>
> >>>
> >>>
> >>> 2009-04-15 14:00:48,208 INFO org.apache.hadoop.dfs.DataNode:
> >>> STARTUP_MSG:
> >>> /
> >>> STARTUP_MSG: Starting DataNode
> >>> STARTUP_MSG:   host = java.net.UnknownHostException: myhost: myhost
> >>> STARTUP_MSG:   args = []
> >>> STARTUP_MSG:   version = 0.18.3
> >>> STARTUP_MSG:   build =
> >>> https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18 -r
> >>> 736250;
> >>> compiled by 'ndaley' on Thu Jan 22 23:12

Re: Datanode Setup

2009-04-17 Thread jpe30

Ok, I have my hosts file set up the way you told me, and I changed my
replication factor to 1.  The thing that I don't get is this line from the
datanodes...

STARTUP_MSG:   host = java.net.UnknownHostException: myhost: myhost

If I have my hadoop-site.xml set up correctly, with the correct address, it
should work, right?  It seems like the datanodes aren't getting an IP address
to use, and I'm not sure why.


jpe30 wrote:
> 
> That helps a lot actually.  I will try setting up my hosts file tomorrow
> and make the other changes you suggested.
> 
> Thanks!
> 
> 
> 
> Mithila Nagendra wrote:
>> 
>> Hi,
>> The replication factor has to be set to 1. Also for you dfs and job
>> tracker
>> configuration you should insert the name of the node rather than the i.p
>> address.
>> 
>> For instance:
>>  192.168.1.10:54310
>> 
>> can be:
>> 
>>  master:54310
>> 
>> The nodes can be renamed by renaming them in the hosts files in /etc
>> folder.
>> It should look like the following:
>> 
>> # Do not remove the following line, or various programs
>> # that require network functionality will fail.
>> 127.0.0.1   localhost.localdomain   localhost   node01
>> 192.168.0.1 node01
>> 192.168.0.2 node02
>> 192.168.0.3 node03
>> 
>> Hope this helps
>> Mithila
>> 
>> On Wed, Apr 15, 2009 at 9:40 PM, jpe30  wrote:
>> 
>>>
>>> I'm setting up a Hadoop cluster and I have the name node and job tracker
>>> up
>>> and running.  However, I cannot get any of my datanodes or tasktrackers
>>> to
>>> start.  Here is my hadoop-site.xml file...
>>>
>>>
>>>
>>> 
>>> 
>>>
>>> 
>>>
>>> 
>>>
>>> 
>>>  hadoop.tmp.dir
>>>  /home/hadoop/h_temp
>>>  A base for other temporary directories.
>>> 
>>>
>>> 
>>>  dfs.data.dir
>>>  /home/hadoop/data
>>> 
>>>
>>> 
>>>  fs.default.name
>>>   192.168.1.10:54310
>>>  The name of the default file system.  A URI whose
>>>   scheme and authority determine the FileSystem implementation.  The
>>>  uri's scheme determines the config property (fs.SCHEME.impl) naming
>>>  the FileSystem implementation class.  The uri's authority is used to
>>>   determine the host, port, etc. for a filesystem.
>>>  true
>>> 
>>>
>>> 
>>>  mapred.job.tracker
>>>   192.168.1.10:54311
>>>  The host and port that the MapReduce job tracker runs
>>>   at.  If "local", then jobs are run in-process as a single map
>>>  and reduce task.
>>>   
>>> 
>>>
>>> 
>>>  dfs.replication
>>>  0
>>>   Default block replication.
>>>   The actual number of replications can be specified when the file is
>>> created.
>>>  The default is used if replication is not specified in create time.
>>>   
>>> 
>>>
>>> 
>>>
>>>
>>> and here is the error I'm getting...
>>>
>>>
>>>
>>>
>>> 2009-04-15 14:00:48,208 INFO org.apache.hadoop.dfs.DataNode:
>>> STARTUP_MSG:
>>> /
>>> STARTUP_MSG: Starting DataNode
>>> STARTUP_MSG:   host = java.net.UnknownHostException: myhost: myhost
>>> STARTUP_MSG:   args = []
>>> STARTUP_MSG:   version = 0.18.3
>>> STARTUP_MSG:   build =
>>> https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18 -r
>>> 736250;
>>> compiled by 'ndaley' on Thu Jan 22 23:12:0$
>>> /
>>> 2009-04-15 14:00:48,355 ERROR org.apache.hadoop.dfs.DataNode:
>>> java.net.UnknownHostException: myhost: myhost
>>>at java.net.InetAddress.getLocalHost(InetAddress.java:1353)
>>>at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:185)
>>>at
>>> org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:249)
>>> at org.apache.hadoop.dfs.DataNode.(DataNode.java:223)
>>> at
>>> org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:3071)
>>>at
>>> org.apache.hadoop.dfs.DataNode.instantiateDataNode(DataNode.java:3026)
>>>at
>>> org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:3034)
>>>at org.apache.hadoop.dfs.DataNode.main(DataNode.java:3156)
>>>
>>> 2009-04-15 14:00:48,356 INFO org.apache.hadoop.dfs.DataNode:
>>> SHUTDOWN_MSG:
>>> /
>>> SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException:
>>> myhost: myhost
>>> /
>>>
>>> --
>>> View this message in context:
>>> http://www.nabble.com/Datanode-Setup-tp23064660p23064660.html
>>> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>>>
>>>
>> 
>> 
> 
> 




Re: Datanode Setup

2009-04-15 Thread jpe30

That helps a lot actually.  I will try setting up my hosts file tomorrow and
make the other changes you suggested.

Thanks!



Mithila Nagendra wrote:
> 
> Hi,
> The replication factor has to be set to 1. Also for you dfs and job
> tracker
> configuration you should insert the name of the node rather than the i.p
> address.
> 
> For instance:
>  192.168.1.10:54310
> 
> can be:
> 
>  master:54310
> 
> The nodes can be renamed by renaming them in the hosts files in /etc
> folder.
> It should look like the following:
> 
> # Do not remove the following line, or various programs
> # that require network functionality will fail.
> 127.0.0.1   localhost.localdomain   localhost   node01
> 192.168.0.1 node01
> 192.168.0.2 node02
> 192.168.0.3 node03
> 
> Hope this helps
> Mithila
> 
> On Wed, Apr 15, 2009 at 9:40 PM, jpe30  wrote:
> 
>>
>> I'm setting up a Hadoop cluster and I have the name node and job tracker
>> up
>> and running.  However, I cannot get any of my datanodes or tasktrackers
>> to
>> start.  Here is my hadoop-site.xml file...
>>
>>
>>
>> 
>> 
>>
>> 
>>
>> 
>>
>> 
>>  hadoop.tmp.dir
>>  /home/hadoop/h_temp
>>  A base for other temporary directories.
>> 
>>
>> 
>>  dfs.data.dir
>>  /home/hadoop/data
>> 
>>
>> 
>>  fs.default.name
>>   192.168.1.10:54310
>>  The name of the default file system.  A URI whose
>>   scheme and authority determine the FileSystem implementation.  The
>>  uri's scheme determines the config property (fs.SCHEME.impl) naming
>>  the FileSystem implementation class.  The uri's authority is used to
>>   determine the host, port, etc. for a filesystem.
>>  true
>> 
>>
>> 
>>  mapred.job.tracker
>>   192.168.1.10:54311
>>  The host and port that the MapReduce job tracker runs
>>   at.  If "local", then jobs are run in-process as a single map
>>  and reduce task.
>>   
>> 
>>
>> 
>>  dfs.replication
>>  0
>>   Default block replication.
>>   The actual number of replications can be specified when the file is
>> created.
>>  The default is used if replication is not specified in create time.
>>   
>> 
>>
>> 
>>
>>
>> and here is the error I'm getting...
>>
>>
>>
>>
>> 2009-04-15 14:00:48,208 INFO org.apache.hadoop.dfs.DataNode: STARTUP_MSG:
>> /
>> STARTUP_MSG: Starting DataNode
>> STARTUP_MSG:   host = java.net.UnknownHostException: myhost: myhost
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 0.18.3
>> STARTUP_MSG:   build =
>> https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18 -r
>> 736250;
>> compiled by 'ndaley' on Thu Jan 22 23:12:0$
>> /
>> 2009-04-15 14:00:48,355 ERROR org.apache.hadoop.dfs.DataNode:
>> java.net.UnknownHostException: myhost: myhost
>>at java.net.InetAddress.getLocalHost(InetAddress.java:1353)
>>at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:185)
>>at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:249)
>> at org.apache.hadoop.dfs.DataNode.(DataNode.java:223)
>> at
>> org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:3071)
>>at
>> org.apache.hadoop.dfs.DataNode.instantiateDataNode(DataNode.java:3026)
>>at
>> org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:3034)
>>at org.apache.hadoop.dfs.DataNode.main(DataNode.java:3156)
>>
>> 2009-04-15 14:00:48,356 INFO org.apache.hadoop.dfs.DataNode:
>> SHUTDOWN_MSG:
>> /
>> SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException:
>> myhost: myhost
>> /
>>
>> --
>> View this message in context:
>> http://www.nabble.com/Datanode-Setup-tp23064660p23064660.html
>> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>>
>>
> 
> 




Re: Datanode Setup

2009-04-15 Thread Mithila Nagendra
Hi,
The replication factor has to be set to 1.  Also, for your dfs and job tracker
configuration you should use the name of the node rather than the IP
address.

For instance:
 192.168.1.10:54310

can be:

 master:54310

The nodes can be renamed by renaming them in the hosts files in the /etc folder.
It should look like the following:

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1   localhost.localdomain   localhost   node01
192.168.0.1 node01
192.168.0.2 node02
192.168.0.3 node03
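
In hadoop-site.xml the corresponding change would then look something like the
fragment below (only a sketch: it keeps the ports from the original post, uses
the master's hostname instead of its IP address, and sets replication to 1 as
described above; the rest of the file stays as posted):

<property>
  <name>fs.default.name</name>
  <value>master:54310</value>
</property>

<property>
  <name>mapred.job.tracker</name>
  <value>master:54311</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>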

Hope this helps
Mithila

On Wed, Apr 15, 2009 at 9:40 PM, jpe30  wrote:

>
> I'm setting up a Hadoop cluster and I have the name node and job tracker up
> and running.  However, I cannot get any of my datanodes or tasktrackers to
> start.  Here is my hadoop-site.xml file...
>
>
>
> 
> 
>
> 
>
> 
>
> 
>  hadoop.tmp.dir
>  /home/hadoop/h_temp
>  A base for other temporary directories.
> 
>
> 
>  dfs.data.dir
>  /home/hadoop/data
> 
>
> 
>  fs.default.name
>   192.168.1.10:54310
>  The name of the default file system.  A URI whose
>   scheme and authority determine the FileSystem implementation.  The
>  uri's scheme determines the config property (fs.SCHEME.impl) naming
>  the FileSystem implementation class.  The uri's authority is used to
>   determine the host, port, etc. for a filesystem.
>  true
> 
>
> 
>  mapred.job.tracker
>   192.168.1.10:54311
>  The host and port that the MapReduce job tracker runs
>   at.  If "local", then jobs are run in-process as a single map
>  and reduce task.
>   
> 
>
> 
>  dfs.replication
>  0
>   Default block replication.
>   The actual number of replications can be specified when the file is
> created.
>  The default is used if replication is not specified in create time.
>   
> 
>
> 
>
>
> and here is the error I'm getting...
>
>
>
>
> 2009-04-15 14:00:48,208 INFO org.apache.hadoop.dfs.DataNode: STARTUP_MSG:
> /
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = java.net.UnknownHostException: myhost: myhost
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.18.3
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18 -r
> 736250;
> compiled by 'ndaley' on Thu Jan 22 23:12:0$
> /
> 2009-04-15 14:00:48,355 ERROR org.apache.hadoop.dfs.DataNode:
> java.net.UnknownHostException: myhost: myhost
>at java.net.InetAddress.getLocalHost(InetAddress.java:1353)
>at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:185)
>at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:249)
> at org.apache.hadoop.dfs.DataNode.(DataNode.java:223)
> at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:3071)
>at
> org.apache.hadoop.dfs.DataNode.instantiateDataNode(DataNode.java:3026)
>at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:3034)
>at org.apache.hadoop.dfs.DataNode.main(DataNode.java:3156)
>
> 2009-04-15 14:00:48,356 INFO org.apache.hadoop.dfs.DataNode: SHUTDOWN_MSG:
> /
> SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException:
> myhost: myhost
> /
>
> --
> View this message in context:
> http://www.nabble.com/Datanode-Setup-tp23064660p23064660.html
> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>
>


Datanode Setup

2009-04-15 Thread jpe30

I'm setting up a Hadoop cluster and I have the name node and job tracker up
and running.  However, I cannot get any of my datanodes or tasktrackers to
start.  Here is my hadoop-site.xml file...

<?xml version="1.0"?>

<configuration>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/h_temp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/home/hadoop/data</value>
</property>

<property>
  <name>fs.default.name</name>
  <value>192.168.1.10:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
  <final>true</final>
</property>

<property>
  <name>mapred.job.tracker</name>
  <value>192.168.1.10:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>

<property>
  <name>dfs.replication</name>
  <value>0</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is
  created.
  The default is used if replication is not specified in create time.
  </description>
</property>

</configuration>

and here is the error I'm getting...

2009-04-15 14:00:48,208 INFO org.apache.hadoop.dfs.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = java.net.UnknownHostException: myhost: myhost
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.18.3
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18 -r 736250; compiled by 'ndaley' on Thu Jan 22 23:12:0$
************************************************************/
2009-04-15 14:00:48,355 ERROR org.apache.hadoop.dfs.DataNode: java.net.UnknownHostException: myhost: myhost
        at java.net.InetAddress.getLocalHost(InetAddress.java:1353)
        at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:185)
        at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:249)
        at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:223)
        at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:3071)
        at org.apache.hadoop.dfs.DataNode.instantiateDataNode(DataNode.java:3026)
        at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:3034)
        at org.apache.hadoop.dfs.DataNode.main(DataNode.java:3156)

2009-04-15 14:00:48,356 INFO org.apache.hadoop.dfs.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: myhost: myhost
************************************************************/
