[ https://issues.apache.org/jira/browse/FLINK-36356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886310#comment-17886310 ]

Piotr Nowojski commented on FLINK-36356:
----------------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=62757&view=logs&j=2e8cb2f7-b2d3-5c62-9c05-cd756d33a819&t=2dd510a3-5041-5201-6dc3-54d310f68906

> HadoopRecoverableWriterTest.testRecoverWithState fails due to IOException
> -------------------------------------------------------------------------
>
>                 Key: FLINK-36356
>                 URL: https://issues.apache.org/jira/browse/FLINK-36356
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Hadoop Compatibility
>    Affects Versions: 2.0-preview
>            Reporter: Matthias Pohl
>            Priority: Critical
>              Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=62378&view=logs&j=2e8cb2f7-b2d3-5c62-9c05-cd756d33a819&t=2dd510a3-5041-5201-6dc3-54d310f68906&l=10514
> {code}
> Sep 23 07:55:16 07:55:16.451 [ERROR] Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 20.05 s <<< FAILURE! -- in org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterTest
> Sep 23 07:55:16 07:55:16.451 [ERROR] org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterTest.testRecoverWithState -- Time elapsed: 2.694 s <<< ERROR!
> Sep 23 07:55:16 java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45240,DS-13a30476-dff5-4f3a-88b1-887571521a95,DISK]] are bad. Aborting...
> Sep 23 07:55:16       at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
> Sep 23 07:55:16       at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
> Sep 23 07:55:16       at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
> Sep 23 07:55:16       at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
> {code}
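>
> For context, this is roughly the persist-and-recover sequence the test exercises, sketched against Flink's public RecoverableWriter API (an illustrative approximation, not the actual test body; the path and payloads are placeholders):
> {code}
> import java.nio.charset.StandardCharsets;
>
> import org.apache.flink.core.fs.FileSystem;
> import org.apache.flink.core.fs.Path;
> import org.apache.flink.core.fs.RecoverableFsDataOutputStream;
> import org.apache.flink.core.fs.RecoverableWriter;
>
> class RecoverWithStateSketch {
>     static void persistAndRecover(Path testPath) throws Exception {
>         FileSystem fs = testPath.getFileSystem();
>         RecoverableWriter writer = fs.createRecoverableWriter();
>
>         // Write some data and take a persist point; everything written
>         // before persist() must survive a later failure.
>         RecoverableFsDataOutputStream out = writer.open(testPath);
>         out.write("test data".getBytes(StandardCharsets.UTF_8));
>         RecoverableWriter.ResumeRecoverable resumeState = out.persist();
>
>         // Simulate a crash by abandoning the stream, then resume from
>         // the persisted state and commit the final file.
>         RecoverableFsDataOutputStream recovered = writer.recover(resumeState);
>         recovered.write("more data".getBytes(StandardCharsets.UTF_8));
>         recovered.closeForCommit().commit();
>     }
> }
> {code}
> On HDFS, recovery truncates the file back to the persisted length and reopens it for append, which is consistent with the DataStreamer.setupPipelineForAppendOrRecovery frame in the trace above.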
> The Maven logs reveal a bit more (I attached an extract of the failed build's logs):
> {code}
> 07:55:13,491 [DataXceiver for client DFSClient_NONMAPREDUCE_211593080_35 at /127.0.0.1:59360 [Receiving block BP-289839883-172.27.0.2-1727078098659:blk_1073741832_1016]] ERROR org.apache.hadoop.hdfs.server.datanode.DataNode              [] - 127.0.0.1:46429:DataXceiver error processing WRITE_BLOCK operation  src: /127.0.0.1:59360 dst: /127.0.0.1:46429
> java.nio.channels.ClosedByInterruptException: null
>         at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) ~[?:1.8.0_292]
>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:406) ~[?:1.8.0_292]
>         at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57) ~[hadoop-common-2.10.2.jar:?]
>         at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) ~[hadoop-common-2.10.2.jar:?]
>         at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) ~[hadoop-common-2.10.2.jar:?]
>         at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) ~[hadoop-common-2.10.2.jar:?]
>         at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) ~[?:1.8.0_292]
>         at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) ~[?:1.8.0_292]
>         at java.io.BufferedInputStream.read(BufferedInputStream.java:345) ~[?:1.8.0_292]
>         at java.io.DataInputStream.read(DataInputStream.java:149) ~[?:1.8.0_292]
>         at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) ~[hadoop-common-2.10.2.jar:?]
>         at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) ~[hadoop-hdfs-client-2.10.2.jar:?]
>         at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) ~[hadoop-hdfs-client-2.10.2.jar:?]
>         at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) ~[hadoop-hdfs-client-2.10.2.jar:?]
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) ~[hadoop-hdfs-2.10.2.jar:?]
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) ~[hadoop-hdfs-2.10.2.jar:?]
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) ~[hadoop-hdfs-2.10.2.jar:?]
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) ~[hadoop-hdfs-2.10.2.jar:?]
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) ~[hadoop-hdfs-2.10.2.jar:?]
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) [hadoop-hdfs-2.10.2.jar:?]
>         at java.lang.Thread.run(Thread.java:748) [?:1.8.0_292]
> 07:55:13,491 [DataXceiver for client DFSClient_NONMAPREDUCE_211593080_35 at /127.0.0.1:39968 [Receiving block BP-289839883-172.27.0.2-1727078098659:blk_1073741832_1016]] INFO  org.apache.hadoop.hdfs.server.datanode.DataNode              [] - Exception for BP-289839883-172.27.0.2-1727078098659:blk_1073741832_1017
> java.io.IOException: Premature EOF from inputStream
>         at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) ~[hadoop-common-2.10.2.jar:?]
>         at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) ~[hadoop-hdfs-client-2.10.2.jar:?]
>         at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) ~[hadoop-hdfs-client-2.10.2.jar:?]
>         at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) ~[hadoop-hdfs-client-2.10.2.jar:?]
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) ~[hadoop-hdfs-2.10.2.jar:?]
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) [hadoop-hdfs-2.10.2.jar:?]
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) [hadoop-hdfs-2.10.2.jar:?]
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) [hadoop-hdfs-2.10.2.jar:?]
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) [hadoop-hdfs-2.10.2.jar:?]
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) [hadoop-hdfs-2.10.2.jar:?]
> {code}
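
Reading the two excerpts together: the ClosedByInterruptException is raised in the datanode's own DataXceiver thread (that is where AbstractInterruptibleChannel throws it), so the xceiver serving the write was interrupted mid-read, and the second excerpt's "Premature EOF from inputStream" looks like the follow-on symptom on the next pipeline attempt. Since only a single datanode ever appears in the pipeline, DataStreamer has no replacement node to swap in and aborts with "All datanodes [...] are bad". A minimal sketch of that single-node setup, assuming the standard MiniDFSCluster test harness (illustrative; not the test's actual setup code):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

class SingleNodeHdfsSketch {
    static MiniDFSCluster startCluster() throws Exception {
        // One datanode mirrors the logs above, where 127.0.0.1:45240 is
        // the only pipeline member. With a single datanode there is no
        // replacement node for pipeline recovery, so any bad-node event
        // is fatal to the in-flight write.
        Configuration hdfsConfig = new Configuration();
        return new MiniDFSCluster.Builder(hdfsConfig)
                .numDataNodes(1)
                .build();
    }
}
{code}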


