It seems this is due to the Spark SPARK_LOCAL_IP setting.
export SPARK_LOCAL_IP=localhost
will not work.
Then how should it be set?
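For example, does it need to be an address of the gateway that the cluster can route to? Perhaps something like the following (192.168.0.10 is only a placeholder for my gateway's own IP):

    # placeholder address -- replace with the gateway's IP that the
    # cluster machines can actually reach
    export SPARK_LOCAL_IP=192.168.0.10
    # or, equivalently for the driver, in conf/spark-defaults.conf:
    # spark.driver.host   192.168.0.10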
Thank you all~~ 
 


     On Friday, September 25, 2015 5:57 PM, Zhiliang Zhu 
<zchl.j...@yahoo.com.INVALID> wrote:
   

 Hi Steve,
Thanks a lot for your reply.
That is, some commands work on the remote machine with the gateway installed, but some other commands do not. As expected, the remote machine is not in the same local area network as the cluster, and the cluster's ports are blocked.
When I make the remote machine a gateway for another local area cluster, it works fine, and the Hadoop job can be submitted remotely from that machine.
However, I want to submit Spark jobs remotely the same way the Hadoop jobs are submitted .... On the gateway machine I also copied the Spark install directory from the cluster, and conf/spark-env.sh is there as well. But I fail to submit Spark jobs remotely ... The error messages:
15/09/25 17:47:47 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/09/25 17:47:47 INFO Remoting: Starting remoting
15/09/25 17:47:48 ERROR netty.NettyTransport: failed to bind to /220.250.64.225:0, shutting down Netty transport
15/09/25 17:47:48 WARN util.Utils: Service 'sparkDriver' could not bind on port 0. Attempting port 1.
15/09/25 17:47:48 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/09/25 17:47:48 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/09/25 17:47:48 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.

...
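For reference, I submit from the gateway roughly like this (the class, jar path and master below are illustrative, not my real ones):

    spark-submit \
      --master yarn-client \
      --class com.example.MyApp \
      /path/to/my-app.jar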
Would you help with this ...
Thank you very much!
Zhiliang

 


     On Friday, September 25, 2015 5:21 PM, Steve Loughran 
<ste...@hortonworks.com> wrote:
   

 

On 25 Sep 2015, at 05:25, Zhiliang Zhu <zchl.j...@yahoo.com.INVALID> wrote:

However, I could only use "hadoop fs -ls/-mkdir/-rm XXX" commands from the remote machine with the gateway,



which means the namenode is reachable; all those commands only need to interact 
with it.

but commands "hadoop fs -cat/-put XXX    YYY" would not work, with error message as below:
put: File /user/zhuzl/wordcount/input/1._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and 2 node(s) are excluded in this operation.
15/09/25 10:44:00 INFO hdfs.DFSClient: Exception in createBlockOutputStream
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.6.28.96:50010]


the client can't reach the datanodes
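A quick check from the gateway (50010 is the default datanode transfer port, dfs.datanode.address; adjust if your cluster overrides it):

    # does the gateway have a route to the datanode's data-transfer port?
    nc -vz -w 5 10.6.28.96 50010

If the datanodes are reachable by hostname but not by their internal IPs, setting dfs.client.use.datanode.hostname=true on the client side sometimes helps; otherwise the datanode ports have to be opened up to the client network.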

   

  
