[jira] [Resolved] (HADOOP-18001) Update to Jetty 9.4.44

2021-12-08 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HADOOP-18001.
---
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Update to Jetty 9.4.44
> --
>
> Key: HADOOP-18001
> URL: https://issues.apache.org/jira/browse/HADOOP-18001
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuan Luo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HADOOP-18001.001.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>







Apache Hadoop qbt Report: branch-3.2+JDK8 on Linux/x86_64

2021-12-08 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/

[Dec 3, 2021 2:43:37 PM] (Akira Ajisaka) HDFS-16332. Handle invalid token 
exception in sasl handshake (#3677)
[Dec 6, 2021 5:46:34 AM] (Takanobu Asanuma) YARN-10820. Make 
GetClusterNodesRequestPBImpl thread safe. Contributed by Swathi Chandrashekar.
[Dec 6, 2021 7:19:05 AM] (Takanobu Asanuma) HDFS-16268. Balancer stuck when 
moving striped blocks due to NPE (#3546)
[Dec 6, 2021 11:15:11 AM] (Akira Ajisaka) YARN-9063. ATS 1.5 fails to start if 
RollingLevelDb files are corrupt or missing (#3728)




-1 overall


The following subsystems voted -1:
asflicense blanks hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 

Failed junit tests :

   hadoop.hdfs.TestBlockStoragePolicy 
   hadoop.hdfs.TestReconstructStripedFileWithValidator 
   hadoop.mapred.uploader.TestFrameworkUploader 
   hadoop.yarn.sls.TestSLSStreamAMSynth 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/artifact/out/results-compile-cc-root.txt
 [48K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/artifact/out/results-compile-javac-root.txt
 [332K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/artifact/out/blanks-eol.txt
 [13M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/artifact/out/results-checkstyle-root.txt
 [14M]

   hadolint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/artifact/out/results-hadolint.txt
 [8.0K]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/artifact/out/results-pylint.txt
 [148K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/artifact/out/results-shellcheck.txt
 [20K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/artifact/out/xml.txt
 [16K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/artifact/out/results-javadoc-javadoc-root.txt
 [1.7M]

   unit:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 [528K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-uploader.txt
 [12K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
 [96K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt
 [16K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
 [12K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 [20K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
 [16K]

   asflicense:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/24/artifact/out/results-asflicense.txt
 [4.0K]

Powered by Apache Yetus 0.14.0-SNAPSHOT   https://yetus.apache.org


[jira] [Created] (HADOOP-18040) Use maven.test.failure.ignore instead of ignoreTestFailure

2021-12-08 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-18040:
--

 Summary: Use maven.test.failure.ignore instead of ignoreTestFailure
 Key: HADOOP-18040
 URL: https://issues.apache.org/jira/browse/HADOOP-18040
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira Ajisaka


In HADOOP-16596, the "ignoreTestFailure" variable was introduced to ignore unit 
test failures. However, the built-in Maven property "maven.test.failure.ignore" 
can be used instead, which simplifies the pom.xml.
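
A minimal sketch of the idea, assuming a module currently wires a custom 
ignoreTestFailure property into Surefire's testFailureIgnore parameter (the 
exact modules touched are not shown here):

{code:xml}
<!-- Before (sketch): a custom property threaded into the Surefire plugin -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <testFailureIgnore>${ignoreTestFailure}</testFailureIgnore>
  </configuration>
</plugin>

<!-- After (sketch): Surefire already reads maven.test.failure.ignore as the
     user property behind testFailureIgnore, so the plugin configuration can
     be dropped and the property set directly -->
<properties>
  <maven.test.failure.ignore>true</maven.test.failure.ignore>
</properties>
{code}

The property can also be set per run on the command line, e.g. 
{{mvn test -Dmaven.test.failure.ignore=true}}.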






[jira] [Resolved] (HADOOP-18034) Bump mina-core from 2.0.16 to 2.1.5 in /hadoop-project

2021-12-08 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HADOOP-18034.
---
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Bump mina-core from 2.0.16 to 2.1.5 in /hadoop-project 
> ---
>
> Key: HADOOP-18034
> URL: https://issues.apache.org/jira/browse/HADOOP-18034
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Raised from GitHub dependabot.






[jira] [Created] (HADOOP-18039) Upgrade hbase2 version and fix TestTimelineWriterHBaseDown

2021-12-08 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18039:
-

 Summary: Upgrade hbase2 version and fix TestTimelineWriterHBaseDown
 Key: HADOOP-18039
 URL: https://issues.apache.org/jira/browse/HADOOP-18039
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


As mentioned on the parent Jira, we can't upgrade the hbase2 profile version 
beyond 2.2.4 until we either have hbase 2 artifacts that are built with the 
hadoop 3 profile by default or hbase 3 is rolled out (hbase 3 is compatible 
with hadoop 3 versions only).

Let's upgrade the hbase2 profile version to 2.2.4 as part of this Jira and also 
fix TestTimelineWriterHBaseDown so that it creates its connection only after 
the mini cluster is up.
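
A minimal sketch of the intended ordering, using the standard HBase testing 
utility; the class name and test body below are illustrative assumptions, not 
the actual TestTimelineWriterHBaseDown code:

{code:java}
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Hypothetical sketch: open the HBase connection only after the mini
// cluster is running, rather than eagerly during test setup.
public class TimelineWriterOrderingSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster();  // bring the mini cluster up first
    // Only now create the connection; creating it before the cluster is up
    // is the ordering problem described above.
    Connection conn =
        ConnectionFactory.createConnection(util.getConfiguration());
    try {
      // ... exercise the timeline writer against the live cluster ...
    } finally {
      conn.close();
      util.shutdownMiniCluster();
    }
  }
}
{code}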






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2021-12-08 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/712/

[Dec 7, 2021 7:42:57 AM] (Szilard Nemeth) YARN-11014. YARN incorrectly 
validates maximum capacity resources on the validation API. Contributed by 
Benjamin Teke
[Dec 7, 2021 7:51:03 AM] (Szilard Nemeth) YARN-11020. [UI2] No container is 
found for an application attempt with a single AM container. Contributed by 
Andras Gyori
[Dec 7, 2021 8:49:27 AM] (noreply) HDFS-16351. Add path exception information 
in FSNamesystem (#3713). Contributed by guophilipse.
[Dec 7, 2021 12:39:04 PM] (noreply) HDFS-16354. Add description of 
GETSNAPSHOTDIFFLISTING to WebHDFS doc. (#3740)




-1 overall


The following subsystems voted -1:
blanks pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.hdfs.TestHDFSFileSystemContract 
   hadoop.tools.TestDistCpWithRawXAttrs 
   hadoop.tools.TestDistCpWithXAttrs 
   hadoop.tools.contract.TestLocalContractDistCp 
   hadoop.tools.TestDistCpSync 
   hadoop.tools.TestIntegration 
   hadoop.tools.TestExternalCall 
   hadoop.tools.contract.TestHDFSContractDistCp 
   hadoop.tools.TestDistCpViewFs 
   hadoop.tools.TestDistCpWithAcls 
   hadoop.yarn.service.client.TestSystemServiceManagerImpl 
   hadoop.yarn.service.client.TestApiServiceClient 
   hadoop.yarn.service.TestCleanupAfterKill 
   hadoop.yarn.service.TestApiServer 
   hadoop.yarn.csi.client.TestCsiClient 
   hadoop.mapred.nativetask.kvtest.KVTest 
   hadoop.mapred.nativetask.kvtest.LargeKVTest 
   hadoop.mapred.nativetask.compresstest.CompressTest 
   hadoop.mapred.nativetask.combinertest.OldAPICombinerTest 
   hadoop.mapred.nativetask.nonsorttest.NonSortTest 
   hadoop.mapred.nativetask.combinertest.CombinerTest 
   hadoop.mapred.nativetask.combinertest.LargeKVCombinerTest 
   hadoop.streaming.TestStreaming 
   hadoop.streaming.TestStreamingStderr 
   hadoop.streaming.TestStreamAggregate 
   hadoop.streaming.mapreduce.TestStreamXmlRecordReader 
   hadoop.streaming.TestStreamingOutputKeyValueTypes 
   hadoop.streaming.TestStreamXmlMultipleRecords 
   hadoop.streaming.TestMultipleArchiveFiles 
   hadoop.streaming.TestStreamingCounters 
   hadoop.streaming.TestStreamingBackground 
   hadoop.streaming.TestStreamingKeyValue 
   hadoop.streaming.TestStreamReduceNone 
   hadoop.streaming.TestFileArgs 
   hadoop.streaming.TestTypedBytesStreaming 
   hadoop.streaming.TestStreamingCombiner 
   hadoop.streaming.TestGzipInput 
   hadoop.streaming.TestStreamDataProtocol 
   hadoop.streaming.TestRawBytesStreaming 
   hadoop.streaming.TestStreamingExitStatus 
   hadoop.streaming.TestStreamingSeparator 
   hadoop.streaming.TestStreamXmlRecordReader 
   hadoop.streaming.TestSymLink 
   hadoop.streaming.TestUnconsumedInput 
   hadoop.streaming.TestStreamingFailure 
   hadoop.streaming.TestStreamingBadRecords 
   hadoop.streaming.TestMultipleCachefiles 
   hadoop.streaming.TestStreamingOutputOnlyKeys 
   hadoop.tools.dynamometer.workloadgenerator.TestWorkloadGenerator 
   hadoop.tools.dynamometer.TestDynamometerInfra 
   hadoop.tools.dynamometer.blockgenerator.TestBlockGen 
   hadoop.tools.dynamometer.TestDynamometerInfra 
   hadoop.tools.dynamometer.blockgenerator.TestBlockGen 
   hadoop.tools.dynamometer.workloadgenerator.TestWorkloadGenerator 
   hadoop.tools.TestHadoopArchives 
   hadoop.tools.TestHadoopArchiveLogsRunner 
   hadoop.contrib.utils.join.TestDataJoin 
   hadoop.tools.TestDistCh 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/712/artifact/out/results-compile-cc-root.txt
 [96K]

   javac:

  

Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2021-12-08 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderServer 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.mapreduce.lib.input.TestLineRecordReader 
   hadoop.mapred.TestLineRecordReader 
   hadoop.tools.TestDistCpSystem 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/diff-compile-javac-root.txt
  [500K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/diff-checkstyle-root.txt
  [14M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/patch-mvnsite-root.txt
  [1.2M]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/diff-patch-pylint.txt
  [48K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/patch-javadoc-root.txt
  [32K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [232K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [428K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [40K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [128K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/505/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [104K]
   

[jira] [Resolved] (HADOOP-18024) SocketChannel is not closed when IOException happens in Server$Listener.doAccept

2021-12-08 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki resolved HADOOP-18024.
---
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> SocketChannel is not closed when IOException happens in 
> Server$Listener.doAccept
> 
>
> Key: HADOOP-18024
> URL: https://issues.apache.org/jira/browse/HADOOP-18024
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 3.2.2
>Reporter: Haoze Wu
>Assignee: Haoze Wu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> This is a follow-up to HADOOP-17552.
> When the symptom described in HADOOP-17552 happens, the client may time out 
> after 2 minutes, according to the default RPC timeout configuration specified 
> in HADOOP-17552. Until that timeout, the client just waits and does not know 
> that this issue has happened.
> However, we recently found that the client does not actually need to waste 
> these 2 minutes, and the server's availability can also be improved. If an 
> IOException happens at line 1402, 1403, or 1404, we can simply close the 
> problematic `SocketChannel` and continue to accept new socket connections. 
> The client side also becomes aware of the closed socket immediately, instead 
> of waiting 2 minutes.
> The old implementation:
> {code:java}
> //hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
>    public void run() {
>       while (running) {
>         // ...
>         try {
>           // ...
>           while (iter.hasNext()) {
>             // ...
>             try {
>               if (key.isValid()) {
>                 if (key.isAcceptable())
>                   doAccept(key);                              // line 1348
>               }
>             } catch (IOException e) {                         // line 1350
>             }
>             // ...
>           }
>         } catch (OutOfMemoryError e) {
>           // ...
>         } catch (Exception e) {
>           // ...
>         }
>       }
>     } {code}
> {code:java}
> //hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
>     void doAccept(SelectionKey key) throws InterruptedException, IOException, 
>         OutOfMemoryError {
>       ServerSocketChannel server = (ServerSocketChannel) key.channel();
>       SocketChannel channel;
>       while ((channel = server.accept()) != null) {           // line 1400
>         channel.configureBlocking(false);                     // line 1402
>         channel.socket().setTcpNoDelay(tcpNoDelay);           // line 1403
>         channel.socket().setKeepAlive(true);                  // line 1404
>         Reader reader = getReader();
>         Connection c = connectionManager.register(channel,
>             this.listenPort, this.isOnAuxiliaryPort);
>         // If the connectionManager can't take it, close the connection.
>         if (c == null) {
>           if (channel.isOpen()) {
>             IOUtils.cleanup(null, channel);
>           }
>           connectionManager.droppedConnections.getAndIncrement();
>           continue;
>         }
>         key.attach(c);  // so closeCurrentConnection can get the object
>         reader.addConnection(c);
>       }
>     } {code}
>  
> We propose the following improved implementation:
> {code:java}
>     void doAccept(SelectionKey key) throws InterruptedException, IOException, 
>         OutOfMemoryError {
>       ServerSocketChannel server = (ServerSocketChannel) key.channel();
>       SocketChannel channel;
>       while ((channel = server.accept()) != null) {           // line 1400
>         try {
>           channel.configureBlocking(false);                   // line 1402
>           channel.socket().setTcpNoDelay(tcpNoDelay);         // line 1403
>           channel.socket().setKeepAlive(true);                // line 1404
>         } catch (IOException e) {
>           LOG.warn(...);
>           try {
>             channel.socket().close();
>             channel.close();
>           } catch (IOException ignored) { }
>           continue;
>         }
>         // ...
>       }
>     }{code}
> The advantages include:
>  # {*}In the old implementation{*}, the `ServerSocketChannel` was abandoned 
> because of a single exception in a single `SocketChannel`, since the 
> exception handler sits at line 1350. {*}In the new implementation{*}, a 
> try-catch handles an exception at line 1402, 1403, or 1404, so the 
> `ServerSocketChannel` can continue to accept new connections without going 
> back to line 1348 on the next iteration of the while loop in the run method.
>  # {*}In the old