Ayon,

Thanks for the information. Would you be able to share your test-connection
code with me? Also, the problem I described does not seem to occur if I run
the same code inside the cluster, so I think there must be a configuration
parameter or some knob I can turn to make the namenode serve files to
clients on a different network.
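One candidate knob, for what it's worth: the namenode's RPC server binds to
whatever address the fs.default.name hostname resolves to on the namenode
host, so if hm01 resolves to its internal IP there, nothing ever listens on
the external interface. A hedged sketch for hdfs-site.xml, assuming a newer
Hadoop release (2.1+) where this property exists:

<property>
    <!-- Assumption: only available on Hadoop 2.1+; tells the namenode's
         RPC server to listen on all interfaces rather than only the one
         the fs.default.name hostname resolves to. -->
    <name>dfs.namenode.rpc-bind-host</name>
    <value>0.0.0.0</value>
</property>

On older lines like 0.18/0.20 this property does not exist, and the usual
workaround is to make the fs.default.name hostname resolve to an address on
the interface the namenode should listen on.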

Felix

On Mon, Jan 31, 2011 at 2:03 PM, Ayon Sinha <ayonsi...@yahoo.com> wrote:

> Also, be careful about this when you try to connect to HDFS and it doesn't
> respond. There was a place in the code where it was hard-coded to retry *45
> times* when there was a socket ConnectException, trying every *15 secs*. It
> was not (at least in the 0.18 code I looked at) honoring the configured
> max connect retries.
>
> My workaround was to wrap the call in test-connection code before
> actually handing control to HDFS to connect.
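>
> In rough outline, the wrapper is just a raw socket probe before the HDFS
> call (a minimal sketch; the class name, port, and timeout here are
> illustrative, not the exact code from our job):
>
> import java.io.IOException;
> import java.net.InetSocketAddress;
> import java.net.Socket;
>
> public class HdfsProbe {
>     /** Returns true if the namenode RPC port accepts a TCP connection
>      *  within timeoutMs, so the caller can fail fast instead of letting
>      *  the IPC client retry for many minutes. */
>     public static boolean canConnect(String host, int port, int timeoutMs) {
>         Socket socket = new Socket();
>         try {
>             socket.connect(new InetSocketAddress(host, port), timeoutMs);
>             return true;
>         } catch (IOException e) {
>             return false;
>         } finally {
>             try { socket.close(); } catch (IOException ignored) { }
>         }
>     }
> }
>
> If canConnect("hm01.xxx.xxx.com", 50001, 5000) returns false, we bail out
> immediately instead of letting the IPC client retry 45 times.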
>
> -Ayon
>
>
> ------------------------------
> *From:* felix gao <gre1...@gmail.com>
> *To:* hdfs-user@hadoop.apache.org
> *Sent:* Mon, January 31, 2011 1:54:41 PM
> *Subject:* Re: Configure NameNode to accept connection from external ips
>
> I am trying to create a client that talks to HDFS, and I am running into
> the following problem:
>
> ipc.Client: Retrying connect to server:
> hm01.xxx.xxx.com/xx.xxx.xxx.176:50001. Already tried 0 time(s).
>
> hm01 runs the namenode and tasktracker and is reachable via the internal
> IP range 192.168.100.1 to 192.168.100.255. However, my client sits on a
> completely different network.  What do I need to configure to make the
> namenode serve a client that initiates requests from a different
> network?
>
> Here is how core-site.xml is configured for the namenode on my client:
> <property>
>     <name>fs.default.name</name>
>     <value>hdfs://hm01.xxx.xxx.com:50001</value>
> </property>
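>
> The client itself is just the standard FileSystem API; a minimal sketch of
> what it runs (the path /user matches what I try from the shell below):
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class ListUser {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = new Configuration();
>         // Same value as in core-site.xml above; set in code so the
>         // example is self-contained.
>         conf.set("fs.default.name", "hdfs://hm01.xxx.xxx.com:50001");
>         FileSystem fs = FileSystem.get(conf);
>         // Listing /user is what fails from outside the cluster network.
>         for (FileStatus status : fs.listStatus(new Path("/user"))) {
>             System.out.println(status.getPath());
>         }
>     }
> }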
>
> Thanks,
>
> Felix
>
>
>
> On Tue, Jan 25, 2011 at 2:24 PM, felix gao <gre1...@gmail.com> wrote:
>
>> Hi guys,
>>
>> I have a small cluster in which each machine has two NICs: one configured
>> with an external IP and the other with an internal IP.  Right now all the
>> machines communicate with each other via the internal IPs.  I want to
>> configure the namenode to also accept connections via its external IP
>> (from whitelisted IPs), but I am not sure how to do that.  I have a copy
>> of the slaves' conf files on my local computer, which sits outside of the
>> cluster network, and when I run hadoop fs -ls /user it does not connect
>> to HDFS.
>>
>> Thanks,
>>
>> Felix
>>
>
>
>
