[ https://issues.apache.org/jira/browse/HADOOP-10081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13813400#comment-13813400 ]

Jason Lowe commented on HADOOP-10081:
-------------------------------------

The close method closes this.in and this.out but does not close this.socket, 
which might have helped in these scenarios.  One mechanism that can trigger 
this is having a client try to connect to HDFS on the wrong port (e.g. the web 
port instead of the RPC port).  Doing so makes the SASL setup blow up with an 
OOM error, since it does not check for sane bounds on string lengths and tries 
to allocate gigantic byte arrays.  Sample backtrace snippet from a 0.23 client:

{noformat}
Caused by: java.io.IOException: Couldn't set up IO streams
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:623)
        at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:207)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1204)
        at org.apache.hadoop.ipc.Client.call(Client.java:1074)
        ... 25 more
Caused by: java.lang.OutOfMemoryError
        at sun.misc.Unsafe.allocateMemory(Native Method)
        at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:127)
        at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
        at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:174)
        at sun.nio.ch.IOUtil.read(IOUtil.java:196)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
        at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:54)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:154)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:127)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
        at java.io.DataInputStream.readFully(DataInputStream.java:195)
        at java.io.DataInputStream.readFully(DataInputStream.java:169)
        at org.apache.hadoop.io.WritableUtils.readString(WritableUtils.java:125)
        at org.apache.hadoop.security.SaslRpcClient.readStatus(SaslRpcClient.java:114)
        at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:150)
        at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:409)
        at org.apache.hadoop.ipc.Client$Connection.access$1300(Client.java:207)
        at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:578)
        at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:575)
        at java.security.AccessController.doPrivileged(Native Method)
{noformat}
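To illustrate the cleanup issue: a sketch of the kind of defensive close that would avoid the leak, closing the socket in addition to the streams and swallowing secondary exceptions so every resource gets a close attempt. The class and method names here are hypothetical, not the actual Client$Connection code.

{noformat}
import java.io.Closeable;
import java.io.IOException;
import java.net.Socket;

// Hypothetical sketch of the fix idea: close this.in and this.out
// AND this.socket, so a failure in one close() cannot leave the
// socket fd dangling.
class ConnectionCleanup {
    static void closeQuietly(Closeable c) {
        if (c == null) return;
        try { c.close(); } catch (IOException ignored) { }
    }

    static void closeConnection(Closeable in, Closeable out, Socket socket) {
        closeQuietly(in);
        closeQuietly(out);
        if (socket != null) {
            try { socket.close(); } catch (IOException ignored) { }
        }
    }
}
{noformat}

```java
import java.io.Closeable;
import java.io.IOException;
import java.net.Socket;

// Hypothetical sketch of the fix idea: close this.in and this.out
// AND this.socket, so a failure in one close() cannot leave the
// socket fd dangling.
class ConnectionCleanup {
    static void closeQuietly(Closeable c) {
        if (c == null) return;
        try { c.close(); } catch (IOException ignored) { }
    }

    static void closeConnection(Closeable in, Closeable out, Socket socket) {
        closeQuietly(in);
        closeQuietly(out);
        if (socket != null) {
            try { socket.close(); } catch (IOException ignored) { }
        }
    }
}
```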

There's probably a separate JIRA here for the fact that the SASL layer isn't 
doing sanity checks on the lengths of strings it's trying to read.
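As a sketch of what such a sanity check might look like: bound the advertised length before allocating, instead of blindly allocating whatever length prefix arrives, which is what goes wrong when the stream is actually an HTTP response rather than the RPC protocol. The class name and the 64 KB bound below are assumptions for illustration, not the actual SaslRpcClient code.

{noformat}
import java.io.DataInputStream;
import java.io.IOException;

// Hypothetical sketch: validate a length prefix before allocating.
// Feeding this, say, the bytes "HTTP..." yields a length of
// 0x48545450 (~1.2 GB), which is rejected instead of allocated.
class BoundedRead {
    static final int MAX_SASL_STRING = 64 * 1024; // assumed sane upper bound

    static byte[] readLengthPrefixed(DataInputStream in) throws IOException {
        int len = in.readInt();
        if (len < 0 || len > MAX_SASL_STRING) {
            throw new IOException("Invalid length " + len
                + "; peer is likely not speaking the expected protocol");
        }
        byte[] buf = new byte[len];
        in.readFully(buf);
        return buf;
    }
}
{noformat}

```java
import java.io.DataInputStream;
import java.io.IOException;

// Hypothetical sketch: validate a length prefix before allocating.
// Feeding this, say, the bytes "HTTP..." yields a length of
// 0x48545450 (~1.2 GB), which is rejected instead of allocated.
class BoundedRead {
    static final int MAX_SASL_STRING = 64 * 1024; // assumed sane upper bound

    static byte[] readLengthPrefixed(DataInputStream in) throws IOException {
        int len = in.readInt();
        if (len < 0 || len > MAX_SASL_STRING) {
            throw new IOException("Invalid length " + len
                + "; peer is likely not speaking the expected protocol");
        }
        byte[] buf = new byte[len];
        in.readFully(buf);
        return buf;
    }
}
```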

> Client.setupIOStreams can leak socket resources on exception or error
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-10081
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10081
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc
>    Affects Versions: 0.23.9, 2.2.0
>            Reporter: Jason Lowe
>            Priority: Critical
>
> The setupIOStreams method in org.apache.hadoop.ipc.Client can leak socket 
> resources if an exception is thrown before the inStream and outStream local 
> variables are assigned to this.in and this.out, respectively.  



--
This message was sent by Atlassian JIRA
(v6.1#6144)
