Memory Leaks with SocketChannel.write
-------------------------------------
Key: DIRMINA-391
URL: https://issues.apache.org/jira/browse/DIRMINA-391
Project: MINA
Issue Type: Bug
Components: Transport
Affects Versions: 1.0.4, 1.1.1, 2.0.0-M1
Environment: All versions of JDK
Reporter: Kenji Hollis
There is a known issue in Java when calling "SocketChannel.write" with a
standard (heap) ByteBuffer. The JDK allocates a temporary DirectByteBuffer
internally for each such write, so native memory is consumed slowly over time.
Systems under light load may never notice, but systems under heavy load will
see memory usage climb and will eventually crash with little or no warning.
The way to work around this is to allocate a single ByteBuffer up front with
"allocateDirect", copy the outgoing data into that buffer, and clear it between
writes. That way there is only one ByteBuffer being written to and written
from. When the JDK sees that the data comes from a DirectByteBuffer, it does
not allocate its own temporary DirectByteBuffer - it writes your buffer's
contents directly. This avoids the memory allocation issue.
So, to summarize, here is what needs to be done. Writing through a ByteBuffer
allocated with "allocateDirect" is what worked for us:
--- Cut code ---
// Single reusable direct buffer, sized to the largest expected write.
private final ByteBuffer directOutputBuffer = ByteBuffer.allocateDirect(40960);

public int write(ByteBuffer buf) throws IOException {
    directOutputBuffer.clear();
    // Copy only the remaining bytes of the source buffer. Using put(buf)
    // respects position/limit and works even when buf has no backing array,
    // unlike put(buf.array()).
    directOutputBuffer.put(buf);
    directOutputBuffer.flip();
    return socket.write(directOutputBuffer);
}
--- End cut ---
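One caveat with the snippet above is that it assumes the outgoing data always
fits in the 40K direct buffer. Below is a minimal sketch of how writes larger
than the direct buffer's capacity might be handled by copying in chunks; the
class and field names (DirectWriter, channel, DIRECT_BUFFER_SIZE) are purely
illustrative and not part of MINA.
--- Cut code ---
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

// Illustrative helper, not MINA code: writes a heap buffer of any size through
// one reusable direct buffer, copying at most the direct buffer's capacity per pass.
class DirectWriter {
    private static final int DIRECT_BUFFER_SIZE = 40960;
    private final ByteBuffer directOutputBuffer = ByteBuffer.allocateDirect(DIRECT_BUFFER_SIZE);
    private final SocketChannel channel;

    DirectWriter(SocketChannel channel) {
        this.channel = channel;
    }

    int write(ByteBuffer buf) throws IOException {
        int written = 0;
        while (buf.hasRemaining()) {
            directOutputBuffer.clear();
            // Copy at most DIRECT_BUFFER_SIZE bytes from the source buffer.
            int chunk = Math.min(buf.remaining(), directOutputBuffer.remaining());
            int oldLimit = buf.limit();
            buf.limit(buf.position() + chunk);
            directOutputBuffer.put(buf);
            buf.limit(oldLimit);
            directOutputBuffer.flip();
            int n = channel.write(directOutputBuffer);
            written += n;
            if (n < chunk) {
                // A non-blocking channel may write less than the chunk; push the
                // unwritten bytes back onto the source buffer and let the caller retry.
                buf.position(buf.position() - (chunk - n));
                break;
            }
        }
        return written;
    }
}
--- End cut ---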
This always reuses the single pre-allocated 40K of direct (off-heap) memory
(the size can be lower, depending on the maximum write size your application
and OS require). If you simply call "socket.write( buf )" with a heap buffer,
you will notice after a short amount of time that memory usage increases, and
you will eventually get an OutOfMemoryError.
This is critical enough that it should be fixed, but as I am not a contributing
member (yet) I am not sure how best to tackle it. I believe a pool of
DirectByteBuffer objects would fix the issue, but keep in mind that such a pool
is a permanent allocation of memory: it -cannot- be shrunk after creation, so
it must use a static size. The remaining problem is determining which buffer in
the pool is in use at any given time.
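For what it's worth, a rough sketch of the kind of static-size pool I have in
mind is below; the names (DirectBufferPool, acquire, release) are mine and
purely illustrative, not an existing MINA API. It pre-allocates a fixed number
of direct buffers and hands them out, so no new direct memory is ever allocated
per write.
--- Cut code ---
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch only: a fixed-size pool of direct buffers.
// The pool never grows or shrinks, matching the "static size" constraint above.
class DirectBufferPool {
    private final BlockingQueue<ByteBuffer> pool;

    DirectBufferPool(int buffers, int bufferSize) {
        pool = new ArrayBlockingQueue<ByteBuffer>(buffers);
        for (int i = 0; i < buffers; i++) {
            pool.add(ByteBuffer.allocateDirect(bufferSize));
        }
    }

    // Blocks until a buffer is free, so writers share the fixed set of buffers.
    ByteBuffer acquire() throws InterruptedException {
        ByteBuffer buf = pool.take();
        buf.clear();
        return buf;
    }

    // Must be called after the write completes, or the pool will run dry.
    void release(ByteBuffer buf) {
        pool.add(buf);
    }
}
--- End cut ---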
I can throw together a test case that demonstrates this issue if you would
like. Please see
http://www.velocityreviews.com/forums/t147788-nio-outofmemoryexception.html for
more information on the subject. We hit this issue ourselves, and the
workaround above fixed it for us.
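In the meantime, here is a minimal sketch of the kind of standalone test case I
have in mind; the host, port, sizes, and class name (HeapBufferWriteLeak) are
placeholders of mine. Assuming the behaviour described above, writing fresh
heap buffers of varying sizes in a loop should make the JDK's internal direct
buffer usage grow, which can be watched with a profiler or by setting
-XX:MaxDirectMemorySize to a small value.
--- Cut code ---
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

// Illustrative reproduction sketch only; host, port, and sizes are placeholders.
// Writes fresh heap (non-direct) buffers of varying sizes through
// SocketChannel.write, the problematic case described in this report.
public class HeapBufferWriteLeak {
    public static void main(String[] args) throws Exception {
        SocketChannel socket = SocketChannel.open(
                new InetSocketAddress("localhost", 12345));
        // Intentionally runs until interrupted so memory growth can be observed.
        for (int i = 0; ; i++) {
            ByteBuffer buf = ByteBuffer.allocate(1024 * (1 + (i % 64)));
            while (buf.hasRemaining()) {
                socket.write(buf);
            }
        }
    }
}
--- End cut ---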