[jira] [Commented] (HDFS-10730) Fix some failed tests due to BindException
[ https://issues.apache.org/jira/browse/HDFS-10730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15597070#comment-15597070 ]

Yiqun Lin commented on HDFS-10730:
----------------------------------

Thanks, [~brahmareddy].

> Fix some failed tests due to BindException
> ------------------------------------------
>
>                 Key: HDFS-10730
>                 URL: https://issues.apache.org/jira/browse/HDFS-10730
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Yiqun Lin
>            Assignee: Yiqun Lin
>             Fix For: 3.0.0-alpha2
>
>         Attachments: HDFS-10730.001.patch, HDFS-10730.002.patch
>
>
> In HDFS-10723, [~kihwal] suggested that
> {quote}
> it is not a good idea to hard-code or reuse the same port number in unit tests, because the Jenkins slave can run multiple jobs at the same time.
> {quote}
> I then collected some tests that failed for this reason in recent Jenkins builds.
> I found these two failed tests:
> {{TestFileChecksum.testStripedFileChecksumWithMissedDataBlocks1}} (https://builds.apache.org/job/PreCommit-HDFS-Build/16301/testReport/)
> and
> {{TestDecommissionWithStriped.testDecommissionWithURBlockForSameBlockGroup}} (https://builds.apache.org/job/PreCommit-HDFS-Build/16257/testReport/).
> The stack traces:
> {code}
> java.net.BindException: Problem binding to [localhost:57241] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
> 	at sun.nio.ch.Net.bind0(Native Method)
> 	at sun.nio.ch.Net.bind(Net.java:433)
> 	at sun.nio.ch.Net.bind(Net.java:425)
> 	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> 	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> 	at org.apache.hadoop.ipc.Server.bind(Server.java:538)
> 	at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:811)
> 	at org.apache.hadoop.ipc.Server.<init>(Server.java:2611)
> 	at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:562)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:537)
> 	at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:953)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1361)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:488)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2658)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2546)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2593)
> 	at org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2259)
> 	at org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2298)
> 	at org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2278)
> 	at org.apache.hadoop.hdfs.TestFileChecksum.getFileChecksum(TestFileChecksum.java:482)
> 	at org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocks1(TestFileChecksum.java:182)
> {code}
> {code}
> java.net.BindException: Problem binding to [localhost:54191] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
> 	at sun.nio.ch.Net.bind0(Native Method)
> 	at sun.nio.ch.Net.bind(Net.java:433)
> 	at sun.nio.ch.Net.bind(Net.java:425)
> 	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> 	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> 	at org.apache.hadoop.ipc.Server.bind(Server.java:530)
> 	at org.apache.hadoop.ipc.Server.bind(Server.java:519)
> 	at org.apache.hadoop.hdfs.net.TcpPeerServer.<init>(TcpPeerServer.java:52)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:1082)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1348)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:488)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2658)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2546)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2593)
> 	at org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2259)
> 	at org.apache.hadoop.hdfs.TestDecommissionWithStriped.testDecommissionWithURBlockForSameBlockGroup(TestDecommissionWithStriped.java:255)
> {code}
> We can make a change to update the param value for
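The advice quoted above (never hard-code or reuse a port number in unit tests) usually translates to binding to port 0 so the OS assigns a free ephemeral port. A minimal, self-contained Java sketch — illustration only, not part of the HDFS-10730 patch — of why OS-assigned ports cannot collide the way a hard-coded number can:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class EphemeralPortDemo {
    public static void main(String[] args) throws IOException {
        // Binding to port 0 asks the kernel for any free ephemeral port, so two
        // concurrently running test JVMs (e.g. parallel Jenkins jobs on one
        // slave) can never collide on the same hard-coded number.
        try (ServerSocket a = new ServerSocket(0);
             ServerSocket b = new ServerSocket(0)) {
            // Both sockets got real, distinct ports assigned by the OS.
            System.out.println(a.getLocalPort() > 0
                    && a.getLocalPort() != b.getLocalPort());
        }
    }
}
```

Running this prints `true`: the two listeners coexist because each was given its own port, whereas two listeners both hard-coded to, say, 57241 would hit the `Address already in use` BindException shown in the traces.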
[jira] [Commented] (HDFS-10730) Fix some failed tests due to BindException
[ https://issues.apache.org/jira/browse/HDFS-10730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15595049#comment-15595049 ]

Hudson commented on HDFS-10730:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10653 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10653/])
HDFS-10730. Fix some failed tests due to BindException. Contributed by (brahma: rev f63cd78f6008bf7cfc9ee74217ed6f3d4f5bec5c)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileChecksum.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommissionWithStriped.java
[jira] [Commented] (HDFS-10730) Fix some failed tests due to BindException
[ https://issues.apache.org/jira/browse/HDFS-10730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15593843#comment-15593843 ]

Brahma Reddy Battula commented on HDFS-10730:
---------------------------------------------

[~linyiqun] thanks for updating the patch. The latest patch LGTM; will commit today.
[jira] [Commented] (HDFS-10730) Fix some failed tests due to BindException
[ https://issues.apache.org/jira/browse/HDFS-10730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15593496#comment-15593496 ]

Yiqun Lin commented on HDFS-10730:
----------------------------------

The Jenkins result looks good, feel free to commit, [~brahmareddy]. Or, if you want to make any other change, just let me know. :)
[jira] [Commented] (HDFS-10730) Fix some failed tests due to BindException
[ https://issues.apache.org/jira/browse/HDFS-10730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590870#comment-15590870 ]

Hadoop QA commented on HDFS-10730:
----------------------------------

| (/) *{color:green}+1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 57m 50s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 17s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10730 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12834316/HDFS-10730.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux b4a4b6580373 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8650cc8 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17233/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17233/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HDFS-10730) Fix some failed tests due to BindException
[ https://issues.apache.org/jira/browse/HDFS-10730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590826#comment-15590826 ]

Hadoop QA commented on HDFS-10730:
----------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 0s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 37s{color} | {color:black} {color} |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
| | hadoop.hdfs.TestBlockStoragePolicy |
| | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
| | hadoop.hdfs.server.namenode.ha.TestHAAppend |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10730 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12822562/HDFS-10730.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 3390c5c44d77 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8650cc8 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17231/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17231/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17231/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HDFS-10730) Fix some failed tests due to BindException
[ https://issues.apache.org/jira/browse/HDFS-10730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590665#comment-15590665 ]

Brahma Reddy Battula commented on HDFS-10730:
---------------------------------------------

{{cluster.restartDataNode(dnprop);}}
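The issue description proposes changing the {{keepPort}} argument passed through {{MiniDFSCluster.restartDataNode}}: both failing traces go through that restart path, where a restarted DataNode tries to rebind its previous hard-coded port. The following self-contained Java sketch is a toy stand-in — not Hadoop's actual MiniDFSCluster API — illustrating why rebinding the old port races with other processes while requesting a fresh ephemeral port does not:

```java
import java.io.IOException;
import java.net.ServerSocket;

// Toy model of a restart: with "keep the old port" semantics the server rebinds
// its previous fixed port and can hit BindException if anything grabbed it in
// the meantime; with "fresh port" semantics (bind to 0) it cannot collide.
public class RestartPortSketch {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(0);   // initial ephemeral port
        int oldPort = server.getLocalPort();
        server.close();                              // server goes down

        // Another process grabs the old port while the server is down.
        ServerSocket intruder = new ServerSocket(oldPort);

        boolean keepPortFailed = false;
        try (ServerSocket restarted = new ServerSocket(oldPort)) {
            // Unreachable: the old port is taken.
        } catch (IOException e) {                    // java.net.BindException
            keepPortFailed = true;
        }

        // Restarting on a fresh OS-assigned port always succeeds.
        try (ServerSocket restarted = new ServerSocket(0)) {
            System.out.println(keepPortFailed && restarted.getLocalPort() > 0);
        } finally {
            intruder.close();
        }
    }
}
```

Running this prints `true`: the old-port rebind fails with the same `Address already in use` error seen in the Jenkins runs, while the ephemeral-port restart succeeds.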
[jira] [Commented] (HDFS-10730) Fix some failed tests due to BindException
[ https://issues.apache.org/jira/browse/HDFS-10730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15412808#comment-15412808 ] Yiqun Lin commented on HDFS-10730: -- The failed tests were not related: {{hadoop.tracing.TestTracing}} is tracked by HADOOP-13473 and {{hadoop.security.TestRefreshUserMappings}} is tracked by HADOOP-13469. Thanks for the review.
[jira] [Commented] (HDFS-10730) Fix some failed tests due to BindException
[ https://issues.apache.org/jira/browse/HDFS-10730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411810#comment-15411810 ] Hadoop QA commented on HDFS-10730: --
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 39s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 11s{color} | {color:black} {color} |
|| Reason || Tests ||
| Failed junit tests | hadoop.tracing.TestTracing |
| | hadoop.security.TestRefreshUserMappings |
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12822562/HDFS-10730.001.patch |
| JIRA Issue | HDFS-10730 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux cafe1d0ef3f3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4d3af47 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/16335/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16335/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16335/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.