[ https://issues.apache.org/jira/browse/HADOOP-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

xinqiu.hu updated HADOOP-18534:
-------------------------------
    Description: 
  In the RPC Client, a thread named RpcRequestSender writes each RPC request to 
the socket. Every time a request is sent, sun.nio.ch.IOUtil#write() allocates a 
temporary direct buffer and caches it in a per-thread cache inside sun.nio.ch.Util.

  If the Connection and RpcRequestSender objects are promoted to the old 
generation, they are only reclaimed by a full GC, and until then the 
DirectByteBuffers cached in sun.nio.ch.Util are not released either. When those 
DirectByteBuffers occupy too much memory, the JVM process may never get the 
chance to run a full GC and is killed (for example by the OS out-of-memory killer).
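  A related mitigation (not part of this report, and assuming JDK 8u102+/9+): the 
jdk.nio.maxCachedBufferSize system property caps the size of temporary direct 
buffers that the per-thread cache will retain; larger buffers are allocated and 
freed per write instead of accumulating. A hedged sketch, with an illustrative 
256 KB limit and a hypothetical client class name:

```shell
# Illustrative config fragment: temporary direct buffers larger than 256 KB
# are not cached per thread; they are freed right after each write.
java -Djdk.nio.maxCachedBufferSize=262144 MyRpcClient
```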

  Unfortunately, there is no easy way to free these DirectByteBuffers. Perhaps 
we can free them manually when the Connection is closed, for example:
{code:java}
private void freeDirectBuffer() {
  try {
    // The buffer cache in sun.nio.ch.Util is thread-local, so this must run
    // on the thread that performed the writes (RpcRequestSender). Requesting
    // a 1-byte buffer returns a cached buffer if one exists; cleaning it
    // releases its direct memory immediately instead of waiting for full GC.
    DirectBuffer buffer = (DirectBuffer) Util.getTemporaryDirectBuffer(1);
    buffer.cleaner().clean();
  } catch (Throwable t) {
    LOG.error("free direct memory error, connectionId: " + remoteId, t);
  }
}{code}
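  On JDK 9+ the sun.nio.ch internals used above are encapsulated, but a direct 
buffer one still holds a reference to can be released immediately through 
sun.misc.Unsafe#invokeCleaner (in the jdk.unsupported module). A hedged sketch, 
separate from the patch above; the class and helper names are illustrative, and 
it only works for buffers the caller holds, not for buffers hidden inside the 
thread-local cache:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

public class FreeDirectBuffer {
    // Illustrative helper: release a direct buffer's native memory now,
    // instead of waiting for a full GC to collect the DirectByteBuffer.
    static void free(ByteBuffer buffer) throws Exception {
        if (!buffer.isDirect()) {
            return; // heap buffers have no native memory to release
        }
        Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
        Field theUnsafe = unsafeClass.getDeclaredField("theUnsafe");
        theUnsafe.setAccessible(true);
        Object unsafe = theUnsafe.get(null);
        // invokeCleaner(ByteBuffer) exists on sun.misc.Unsafe since JDK 9.
        Method invokeCleaner =
            unsafeClass.getMethod("invokeCleaner", ByteBuffer.class);
        invokeCleaner.invoke(unsafe, buffer);
    }

    public static void main(String[] args) throws Exception {
        ByteBuffer buffer = ByteBuffer.allocateDirect(4096);
        free(buffer); // native memory is released here
        System.out.println("direct buffer freed");
    }
}
```

  Note that the buffer must not be touched after free(): accessing a cleaned 
direct buffer is undefined behavior and can crash the JVM.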

  was:
  In the RPC Client, a thread named RpcRequestSender writes each RPC request to 
the socket. Every time a request is sent, sun.nio.ch.IOUtil#write() allocates a 
temporary direct buffer and caches it in a per-thread cache inside sun.nio.ch.Util.

  If the Connection and RpcRequestSender objects are promoted to the old 
generation, they are only reclaimed by a full GC, and until then the 
DirectByteBuffers cached in sun.nio.ch.Util are not released either. When those 
DirectByteBuffers occupy too much memory, the JVM process may never get the 
chance to run a full GC and is killed (for example by the OS out-of-memory killer).

  Unfortunately, there is no easy way to free these DirectByteBuffers. Perhaps 
we can free them manually when the Connection is closed, for example:
{code:java}
private void freeDirectBuffer() {
  try {
    ByteBuffer buffer = Util.getTemporaryDirectBuffer(1);
    int i = 0;
    // Drain this thread's buffer cache: keep cleaning cached buffers until
    // getTemporaryDirectBuffer(1) hands back a freshly allocated 1-byte
    // buffer (i.e. the cache is empty), with 1024 iterations as a safety cap.
    while (buffer.capacity() != 1 && i < 1024) {
      ((DirectBuffer) buffer).cleaner().clean();
      buffer = Util.getTemporaryDirectBuffer(1);
      i++;
    }
    ((DirectBuffer) buffer).cleaner().clean();
  } catch (Throwable t) {
    LOG.error("free direct memory error, connectionId: " + remoteId, t);
  }
}{code}


> Propose a mechanism to free the direct memory occupied by RPC Connections
> -------------------------------------------------------------------------
>
>                 Key: HADOOP-18534
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18534
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: rpc-server
>            Reporter: xinqiu.hu
>            Priority: Minor
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
