[ 
https://issues.apache.org/jira/browse/HDFS-17449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17832283#comment-17832283
 ] 

ASF GitHub Bot commented on HDFS-17449:
---------------------------------------

teamconfx opened a new pull request, #6691:
URL: https://github.com/apache/hadoop/pull/6691

   ### Description of PR
   This PR catches the IndexOutOfBoundsException encountered while parsing a 
decommission host name and port string, and throws a more informative 
IllegalArgumentException instead.
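
   A minimal sketch of the validation described above (illustrative only, not 
the exact diff in this PR; the variable names follow the snippet quoted in the 
JIRA report below):

   ```java
   // Reject ill-formed "<host>:<port>" pairs before indexing into the split result.
   String[] hostAndPort = hostNameAndPort.split(":");
   if (hostAndPort.length != 2) {
     throw new IllegalArgumentException(
         "Invalid host name and port pair '" + hostNameAndPort
             + "', expected <host>:<port>");
   }
   dn.setHostName(hostAndPort[0]);
   dn.setPort(Integer.parseInt(hostAndPort[1]));
   dn.setAdminState(AdminStates.DECOMMISSIONED);
   ```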
   
   ### How was this patch tested?
   Running 
org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor#testDecommissionStatusAfterDNRestart
 with the namenode host provider set to 
org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager now 
throws IllegalArgumentException instead of ArrayIndexOutOfBoundsException.
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Ill-formed decommission host name and port pair would trigger IndexOutOfBound 
> error
> -----------------------------------------------------------------------------------
>
>                 Key: HDFS-17449
>                 URL: https://issues.apache.org/jira/browse/HDFS-17449
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: ConfX
>            Priority: Major
>
> h2. What happened:
> Got an ArrayIndexOutOfBoundsException when running 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor#testDecommissionStatusAfterDNRestart
>  with the namenode host provider set to 
> org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager.
> h2. Buggy code:
> In HostsFileWriter.java:
> {code:java}
> String[] hostAndPort = hostNameAndPort.split(":"); // hostNameAndPort might be invalid
> dn.setHostName(hostAndPort[0]);
> dn.setPort(Integer.parseInt(hostAndPort[1])); // an IndexOutOfBoundsException might be thrown here
> dn.setAdminState(AdminStates.DECOMMISSIONED);
> {code}
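> For illustration (the hostname below is made up), String.split(":") returns a 
> single-element array when the input contains no colon, so index 1 does not exist:
> {code:java}
> String[] parts = "datanode-1".split(":"); // no ":" in the input
> System.out.println(parts.length);         // prints 1
> String port = parts[1];                   // ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1
> {code}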
> h2. StackTrace:
> {code:java}
> java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1
>     at org.apache.hadoop.hdfs.util.HostsFileWriter.initOutOfServiceHosts(HostsFileWriter.java:110)
> {code}
> h2. How to reproduce:
> (1) Set {{dfs.namenode.hosts.provider.classname}} to 
> {{org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager}} 
> (see the sketch after these steps)
> (2) Run test: 
> {{org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor#testDecommissionStatusAfterDNRestart}}
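> A minimal Java sketch of step (1), assuming a test-side 
> org.apache.hadoop.conf.Configuration object (illustrative only, not part of 
> the original report):
> {code:java}
> // Switch the namenode host provider to the combined host file manager.
> Configuration conf = new Configuration();
> conf.set("dfs.namenode.hosts.provider.classname",
>     "org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager");
> {code}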


