Hi, Mark

Try adding an extra property to that file and check whether Hadoop
picks it up. That way you can tell whether Hadoop is actually reading
your configuration file at all.
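For example, something like this could be added to the file (the property name here is made up purely for the test):

```xml
<property>
  <!-- dummy property; the name is invented just to test that this file is read -->
  <name>test.marker.property</name>
  <value>hello</value>
</property>
```

If reading it back, e.g. with `new org.apache.hadoop.conf.Configuration().get("test.marker.property")` in a small Java snippet, returns "hello", Hadoop is loading the file; if it returns null, the file is not being picked up (e.g. not on the classpath or in the wrong conf directory).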

2009/2/10 Jeff Hammerbacher <ham...@cloudera.com>:
> Hey Mark,
>
> In NameNode.java, the DEFAULT_PORT specified for NameNode RPC is 8020.
> From my understanding of the code, your fs.default.name setting should
> have overridden this port to be 9000. It appears your Hadoop
> installation has not picked up the configuration settings
> appropriately. You might want to see if you have any Hadoop processes
> running and terminate them (bin/stop-all.sh should help) and then
> restart your cluster with the new configuration to see if that helps.
>
> Later,
> Jeff
>
> On Mon, Feb 9, 2009 at 9:48 PM, Amar Kamat <ama...@yahoo-inc.com> wrote:
>> Mark Kerzner wrote:
>>>
>>> Hi,
>>>
>>> why is hadoop suddenly telling me
>>>
>>>  Retrying connect to server: localhost/127.0.0.1:8020
>>>
>>> with this configuration
>>>
>>> <configuration>
>>>  <property>
>>>    <name>fs.default.name</name>
>>>    <value>hdfs://localhost:9000</value>
>>>  </property>
>>>  <property>
>>>    <name>mapred.job.tracker</name>
>>>    <value>localhost:9001</value>
>>>
>>
>> Shouldn't this be
>>
>> <value>hdfs://localhost:9001</value>
>>
>> Amar
>>>
>>>  </property>
>>>  <property>
>>>    <name>dfs.replication</name>
>>>    <value>1</value>
>>>  </property>
>>> </configuration>
>>>
>>> and both this http://localhost:50070/dfshealth.jsp and this
>>> http://localhost:50030/jobtracker.jsp links work fine?
>>>
>>> Thank you,
>>> Mark
>>>
>>>
>>
>>
>



-- 
M. Raşit ÖZDAŞ
