The machines in the cluster are configured to talk to each other over intranet addresses; you'll have to get intranet access to reach them...
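If opening up intranet access (a VPN into the cluster's network, say) is not an option, a common client-side workaround is to set dfs.client.use.datanode.hostname=true, which makes the DFS client connect to DataNodes by hostname rather than by the internal IP the NameNode hands back. This only helps if the DataNode hostnames resolve to the public 139.9.132.* addresses from your machine. A minimal standalone probe under that assumption (the class name and probe path below are made up for illustration):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsProbe {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Dial DataNodes by hostname instead of the intranet IP the
        // NameNode reports (assumes the hostnames are resolvable here).
        conf.setBoolean("dfs.client.use.datanode.hostname", true);
        try (FileSystem fs = FileSystem.get(
                URI.create("hdfs://test.gl.cdh.node1:8020"), conf, "root")) {
            // Writing a file exercises the DataNode write pipeline, which
            // is exactly where the ConnectTimeoutException below occurs.
            try (FSDataOutputStream out =
                    fs.create(new Path("/flink/flink-cdc-demo/_probe"))) {
                out.writeBytes("probe\n");
            }
        }
    }
}

For the Flink job itself, the same property would go into the hdfs-site.xml that Flink picks up (typically the directory HADOOP_CONF_DIR points at), since setCheckpointStorage writes through the same Hadoop client.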
On 2021-08-20 18:56:58, "杨帅统" <itcc...@163.com> wrote:
>test.gl.cdh.node1 maps to the remote server's public address 139.9.132.*
>192.168.0.32:9866 is the intranet address of another machine on the same
>internal network as the 139.9.132.* host. Why is an intranet address being
>returned?...
>
>On 2021-08-20 18:28:34, "东东" <dongdongking...@163.com> wrote:
>>Isn't it obvious? The connection to 192.168.0.32:9866 is timing out.
>>
>>On 2021-08-20 18:13:10, "杨帅统" <itcc...@163.com> wrote:
>>>// enable checkpointing
>>>env.enableCheckpointing(5000L, CheckpointingMode.EXACTLY_ONCE);
>>>
>>>env.getCheckpointConfig().setCheckpointStorage("hdfs://test.gl.cdh.node1:8020/flink/flink-cdc-demo");
>>>System.setProperty("HADOOP_USER_NAME", "root");
>>>
>>>The error message is as follows:
>>>org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/192.168.0.32:9866]
>>>at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:534)
>>>at org.apache.hadoop.hdfs.DataStreamer.createSocketForPipeline(DataStreamer.java:253)
>>>at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1725)
>>>at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1679)
>>>at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
>>>2021-08-20 18:11:49 WARN DataStreamer:1683 - Abandoning BP-1233461189-192.168.0.27-1615278916172:blk_1074013541_272740
>>>2021-08-20 18:11:50 WARN DataStreamer:1688 - Excluding datanode DatanodeInfoWithStorage[192.168.0.32:9866,DS-f45126b3-f020-473f-b25f-1b37f8540eb7,DISK]
>>>2021-08-20 18:11:50 WARN DataStreamer:826 - DataStreamer Exception
>>>org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /flink/flink-cdc-demo/13714486ceb74d650ba104df7b202920/chk-1/_metadata could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
>>>at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2102)
>>>at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
>>>at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2673)
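Reading the quoted log: the NameNode allocates a block on 192.168.0.32:9866, the client cannot reach that intranet address and times out after 60 s, so it abandons the block and excludes that DataNode; once the same has happened for all three DataNodes the NameNode has nothing left to offer, hence "could only be written to 0 of the 1 minReplication nodes ... 3 node(s) are excluded". The routing problem itself can be confirmed with a bare socket probe from the machine running the job (a throwaway sketch, nothing Flink-specific about it):

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    public static void main(String[] args) throws Exception {
        // Try the exact endpoint the DFS client times out on; if this
        // fails, the DataNode's intranet address is not routable from here.
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress("192.168.0.32", 9866), 5000); // 5 s timeout
            System.out.println("reachable");
        }
    }
}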