[ https://issues.apache.org/jira/browse/HADOOP-4802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12663442#action_12663442 ]

stack commented on HADOOP-4802:
-------------------------------

Removed my patches.  Write me offline if you want to know why.

> RPC Server send buffer retains size of largest response ever sent 
> ------------------------------------------------------------------
>
>                 Key: HADOOP-4802
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4802
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: ipc
>    Affects Versions: 0.18.2, 0.19.0
>            Reporter: stack
>
> The stack-based ByteArrayOutputStream in Server.Handler is reset each time 
> through the run loop.  The reset sets the BAOS 'size' back to zero, but the 
> allocated backing buffer is unaltered.  If, during a Handler's lifecycle, any 
> particular RPC response was fat (megabytes, even), the buffer expands 
> during the write to accommodate that response but never shrinks afterward. 
> If a hosting Server has had more than one 'fat payload' occurrence, the 
> resultant occupied heap can provoke memory woes (see 
> https://issues.apache.org/jira/browse/HBASE-900?focusedCommentId=12654009#action_12654009
>  for an extreme example: occasional payloads of 20-50MB across 30 handlers 
> robbed the heap of 700MB).
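
For anyone reading along, a minimal Java sketch of the behavior described above:
ByteArrayOutputStream.reset() only zeroes the logical count; the grown backing
byte[] stays allocated, so a reused per-handler stream keeps the footprint of the
largest response it ever wrote. The mitigation shown (swapping in a fresh stream
once a response crosses a size threshold) is just one possible approach; the
class, method, and constant names below are illustrative, not from the Hadoop
source tree.

    import java.io.ByteArrayOutputStream;

    // Illustrative sketch only -- not Hadoop code.
    public class HandlerBufferSketch {

        private static final int INITIAL_RESP_BUFFER_SIZE = 10 * 1024;
        // Hypothetical cap: if one response inflated the buffer past this,
        // replace the stream so the enlarged backing byte[] can be collected.
        private static final int MAX_RETAINED_BUFFER_SIZE = 1024 * 1024;

        private ByteArrayOutputStream buf =
            new ByteArrayOutputStream(INITIAL_RESP_BUFFER_SIZE);

        byte[] buildResponse(byte[] payload) {
            // The backing buffer grows as needed to hold the payload.
            buf.write(payload, 0, payload.length);
            byte[] wire = buf.toByteArray();

            // reset() only sets the internal count back to zero; the grown
            // backing array stays allocated for the lifetime of this stream.
            buf.reset();

            // Possible mitigation: after an unusually large response, discard
            // the stream instead of reusing it, so the fat buffer becomes garbage.
            if (wire.length > MAX_RETAINED_BUFFER_SIZE) {
                buf = new ByteArrayOutputStream(INITIAL_RESP_BUFFER_SIZE);
            }
            return wire;
        }
    }

Swapping in a fresh stream trades a little reallocation on the next response for
a bounded long-lived footprint, which matters when dozens of handler threads are
alive at once.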

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
