Does the conf you checked belong to 987.65.43.21?  If no rpc-address is
configured in hdfs-site.xml, 8020 is taken as the default RPC port.
I think you should first check whether there is a NameNode process actually
running on 987.65.43.21, then find the right port. That will be easy if you
can log onto that machine.
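A quick way to run those checks (a sketch only; the host and port come from this thread, so adjust them for your cluster):

```shell
# On 987.65.43.21: is a NameNode process running at all?
jps | grep -i namenode

# Ask the local Hadoop config which RPC address the NameNode uses
# (falls back to the port in fs.defaultFS, conventionally 8020)
hdfs getconf -confKey fs.defaultFS
hdfs getconf -confKey dfs.namenode.rpc-address

# Check that the port is listening and which interface it is bound to;
# a NameNode bound only to 127.0.0.1 will time out for remote clients
ss -ltnp | grep -w 8020

# From the machine running distcp: is the port reachable across the network?
nc -zv -w 5 987.65.43.21 8020
```

If the port is listening locally but nc times out from the remote side, look at firewall/iptables rules or the provider's network settings rather than at Hadoop itself.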

> On May 24, 2019, at 4:10 PM, akshay naidu <akshaynaid...@gmail.com> wrote:
> 
> Yes, "hadoop fs -ls hdfs://987.65.43.21:8020/" is not working.
> 
> I checked ping 987.65.43.21, which is working fine, but
> telnet 987.65.43.21 8020 (also tried ports 80 and 50070) is throwing an error:
> telnet: Unable to connect to remote host: Connection timed out
> I think this is where the problem is.
> 
> Also checked the conf folder.
> There isn't any property related to rpc-address in hdfs-site.xml, and the 
> cluster is non-HA.
> 
> On Fri, May 24, 2019 at 1:18 PM yangtao.yt 
> <yangtao...@alibaba-inc.com> wrote:
> Hi, akshay
> 
> Can you successfully execute a command such as "hadoop fs -ls 
> hdfs://987.65.43.21:8020/" on the machine where you ran distcp? I think the 
> answer is no.
> If both servers are reachable, I think you should check the conf key 
> dfs.namenode.rpc-address, or dfs.namenode.rpc-address.<nameservice>.nn1 (if HA 
> is enabled), in hdfs-site.xml to fetch the correct RPC port.
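> For reference, a minimal non-HA hdfs-site.xml entry setting the RPC address
> would look like this (the host and port here are illustrative, matching the
> thread's example, not taken from the poster's actual config):
> 
>     <property>
>       <name>dfs.namenode.rpc-address</name>
>       <value>987.65.43.21:8020</value>
>     </property>
> 
> When no such property is set, clients derive the RPC port from fs.defaultFS,
> with 8020 as the conventional default.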
> 
> Best,
> Tao Yang
> 
>> On May 24, 2019, at 3:20 PM, akshay naidu <akshaynaid...@gmail.com> wrote:
>> 
>> Hello Joey,
>> Just to understand distcp, I am trying to copy one file; the data actually 
>> to be copied is > 1.5 TB.
>> Anyway, I tried running -cp, but it looks like the issue is connectivity. 
>> See logs:
>> hdfs dfs -cp hdfs://123.45.67.89:54310/data-analytics/spike/beNginxLogs/today/123.45.67.89_2019-05-22.access.log.gz hdfs://987.65.43.21:50070/distCp/
>> 19/05/24 07:15:22 INFO ipc.Client: Retrying connect to server: li868-219.members.linode.com/987.65.43.21:50070. Already tried 0 time(s); maxRetries=45
>> 19/05/24 07:15:42 INFO ipc.Client: Retrying connect to server: li868-219.members.linode.com/987.65.43.21:50070. Already tried 1 time(s); maxRetries=45
>> 19/05/24 07:16:02 INFO ipc.Client: Retrying connect to server: li868-219.members.linode.com/987.65.43.21:50070. Already tried 2 time(s); maxRetries=45
>> .
>> .
>> Facing the same issue.
>> Any idea?
>> Thanks and regards,
>> 
>> On Fri, May 24, 2019 at 8:10 AM Joey Krabacher <jkrabac...@gmail.com> wrote:
>> It looks like you're just trying to copy 1 file?
>> Why not use 'hdfs dfs -cp ...' instead?
>> 
>> On Thu, May 23, 2019, 21:22 yangtao.yt <yangtao...@alibaba-inc.com> wrote:
>> Hi, akshay
>> 
>> It seems this isn't distcp's problem. SocketTimeout exceptions may be caused 
>> by an unreachable network or an unavailable remote server; you can test this 
>> by communicating with the target HDFS cluster directly from the machine where 
>> you executed the distcp command.
>> A full list of causes and suggestions from the community can be found here: 
>> https://wiki.apache.org/hadoop/SocketTimeout
>> 
>> One doubt about your distcp command: why use port 50070 (the HTTP port) 
>> instead of 8020 (the RPC port) for the target HDFS cluster? I'm doubtful it 
>> would connect with 8020 either, according to your logs.
>> 
>> Best,
>> Tao Yang
>> 
>>> On May 23, 2019, at 8:54 PM, akshay naidu <akshaynaid...@gmail.com> wrote:
>>> 
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance0
>> 
> 
