[ https://issues.apache.org/jira/browse/HDFS-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556327#comment-14556327 ]

Hudson commented on HDFS-8268:
------------------------------

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2151 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2151/])
HDFS-8268. Port conflict log for data node server is not sufficient (Contributed by Mohammad Shahid Khan) (vinayakumarb: rev 0c6638c2ea278bd460df88e7118945e461266a8b)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Port conflict log for data node server is not sufficient
> --------------------------------------------------------
>
>                 Key: HDFS-8268
>                 URL: https://issues.apache.org/jira/browse/HDFS-8268
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.7.0, 2.8.0
>         Environment: x86_64 x86_64 x86_64 GNU/Linux
>            Reporter: Mohammad Shahid Khan
>            Assignee: Mohammad Shahid Khan
>            Priority: Minor
>             Fix For: 2.8.0
>
>         Attachments: HDFS-8268.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> DataNode server startup fails due to a port conflict.
> When the port configured by "dfs.datanode.http.address" is already in use, the logged exception is not sufficient to identify the reason for the failure.
> The exception logged by the server is shown below.
> *Actual:*
> 2015-04-27 16:48:53,960 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
> java.net.BindException: Address already in use
>       at sun.nio.ch.Net.bind0(Native Method)
>       at sun.nio.ch.Net.bind(Net.java:437)
>       at sun.nio.ch.Net.bind(Net.java:429)
>       at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>       at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>       at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
>       at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:475)
>       at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1021)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:455)
>       at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:440)
>       at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:844)
>       at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:194)
>       at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:340)
>       at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
>       at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
>       at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
>       at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>       at java.lang.Thread.run(Thread.java:745)
> *_The above log does not identify the conflicting port._*
> *Expected output:*
> java.net.BindException: Problem binding to [0.0.0.0:50075] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>       at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
>       at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
>       at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.start(DatanodeHttpServer.java:160)
>       at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:795)
>       at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1142)
>       at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:439)
>       at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2420)
>       at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2298)
>       at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2349)
>       at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2540)
>       at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2564)
> Caused by: java.net.BindException: Address already in use
>       at sun.nio.ch.Net.bind0(Native Method)
>       at sun.nio.ch.Net.bind(Net.java:437)
>       at sun.nio.ch.Net.bind(Net.java:429)
>       at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>       at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>       at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
>       at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:475)
>       at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1021)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:455)
>       at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:440)
>       at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:844)
>       at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:194)
>       at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:340)
>       at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
>       at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
>       at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
>       at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>       at java.lang.Thread.run(Thread.java:745)
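
The expected output above is produced by catching the low-level bind failure and rethrowing it with the conflicting address in the message; the trace shows the wrapping going through org.apache.hadoop.net.NetUtils.wrapException inside DatanodeHttpServer.start. The following is a minimal, self-contained sketch of that idea, not the actual HDFS-8268 patch; the class name, the bind helper, and the exact message format are assumptions for illustration.

import java.io.IOException;
import java.net.BindException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BindWithAddressSketch {

  // Bind a server socket; on failure, rethrow a BindException that names
  // the conflicting address (hypothetical helper, for illustration only).
  static ServerSocket bind(InetSocketAddress addr) throws IOException {
    ServerSocket socket = new ServerSocket();
    try {
      socket.bind(addr);
      return socket;
    } catch (BindException e) {
      socket.close();
      // Same spirit as org.apache.hadoop.net.NetUtils.wrapException, which the
      // expected stack trace shows being called from DatanodeHttpServer.start().
      BindException wrapped = new BindException("Problem binding to [" + addr + "] " + e
          + "; For more details see: http://wiki.apache.org/hadoop/BindException");
      wrapped.initCause(e);
      throw wrapped;
    }
  }

  public static void main(String[] args) throws IOException {
    // 50075 is the default dfs.datanode.http.address port mentioned in the report.
    InetSocketAddress addr = new InetSocketAddress(50075);
    try (ServerSocket first = bind(addr);
         ServerSocket second = bind(addr)) { // second bind conflicts with the first
      // never reached: the second bind always fails while the first is open
    }
  }
}

Running the sketch binds port 50075 twice; the second bind fails, and the rethrown exception names the address, which is exactly the information missing from the original log.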



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
