Hi, akshay

Can you successfully execute a command such as "hadoop fs -ls
hdfs://987.65.43.21:8020/" on the machine where you ran distcp? I suspect the
answer is no.
If both servers are reachable, you should check the dfs.namenode.rpc-address
property (or dfs.namenode.rpc-address.<nameservice>.nn1 if HA is enabled) in
hdfs-site.xml to find the correct RPC port.
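
For example, something along these lines should reveal the configured RPC
address (a sketch only: it assumes the target cluster's configuration is
readable on that machine, and the /etc/hadoop/conf path is just a common
default that may differ in your setup):

    # print the RPC address resolved from the client configuration
    hdfs getconf -confKey dfs.namenode.rpc-address

    # with HA enabled, query the per-NameNode key instead
    # (replace <nameservice> with your actual nameservice id)
    hdfs getconf -confKey dfs.namenode.rpc-address.<nameservice>.nn1

    # or inspect the target cluster's hdfs-site.xml directly
    grep -A1 'dfs.namenode.rpc-address' /etc/hadoop/conf/hdfs-site.xml

Whatever host:port comes back is what the hdfs:// URI passed to distcp should
point at.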

Best,
Tao Yang

> On May 24, 2019, at 3:20 PM, akshay naidu <akshaynaid...@gmail.com> wrote:
> 
> Hello Joey,
> Just to understand distcp, I am trying to copy one file; otherwise, the data
> to be copied is > 1.5 TB.
> Anyway, I tried running -cp, but it looks like the issue is connectivity. See
> the logs:
> hdfs dfs -cp hdfs://123.45.67.89:54310/data-analytics/spike/beNginxLogs/today/123.45.67.89_2019-05-22.access.log.gz hdfs://987.65.43.21:50070/distCp/
> 19/05/24 07:15:22 INFO ipc.Client: Retrying connect to server: li868-219.members.linode.com/987.65.43.21:50070. Already tried 0 time(s); maxRetries=45
> 19/05/24 07:15:42 INFO ipc.Client: Retrying connect to server: li868-219.members.linode.com/987.65.43.21:50070. Already tried 1 time(s); maxRetries=45
> 19/05/24 07:16:02 INFO ipc.Client: Retrying connect to server: li868-219.members.linode.com/987.65.43.21:50070. Already tried 2 time(s); maxRetries=45
> .
> .
> Facing the same issue.
> Any idea?
> Thanks and regards.
> 
> On Fri, May 24, 2019 at 8:10 AM Joey Krabacher <jkrabac...@gmail.com> wrote:
> It looks like you're just trying to copy 1 file?
> Why not use 'hdfs dfs -cp ...' instead?
> 
> On Thu, May 23, 2019, 21:22 yangtao.yt <yangtao...@alibaba-inc.com> wrote:
> Hi, akshay
> 
> This doesn't seem to be a distcp problem. SocketTimeout exceptions are usually
> caused by an unreachable network or an unavailable remote server; as a test,
> you can try to communicate with the target HDFS cluster directly from the
> machine on which you executed the distcp command.
> A full list of causes and suggestions from the community can be found here:
> https://wiki.apache.org/hadoop/SocketTimeout
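> 
> As a quick reachability check from that machine, something like the following
> could be run (a sketch only: it assumes nc is installed and that 8020 really
> is the target NameNode's RPC port, so substitute whatever port it actually
> listens on):
> 
>     # verify the RPC port accepts TCP connections
>     nc -vz 987.65.43.21 8020
>     # then try an actual HDFS operation against it
>     hadoop fs -ls hdfs://987.65.43.21:8020/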
> 
> One thing about your distcp command puzzles me: why use port 50070 (the HTTP
> port) instead of 8020 (the RPC port) for the target HDFS cluster? I am also
> confused as to whether it can still connect via 8020 according to your logs.
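> 
> For reference, the copy would go through the RPC port with something like the
> command below (only a sketch: 8020 is merely the common default, so use
> whatever RPC port the target NameNode is actually configured with, and
> replace the <src-path> and <dst-path> placeholders with your real paths):
> 
>     hadoop distcp hdfs://123.45.67.89:54310/<src-path> hdfs://987.65.43.21:8020/<dst-path>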
> 
> Best,
> Tao Yang
> 
>> On May 23, 2019, at 8:54 PM, akshay naidu <akshaynaid...@gmail.com> wrote:
>> 
>> sun.reflect.NativeConstructorAccessorImpl.newInstance0
> 
