[
https://issues.apache.org/jira/browse/HADOOP-4802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12655913#action_12655913
]
Edward J. Yoon commented on HADOOP-4802:
----------------------------------------
> stack - 09/Dec/08 12:32 PM
> Patch applies to 0.19 and to TRUNK.
I hope it'll be fixed in the 0.18.3 release.
> RPC Server send buffer retains size of largest response ever sent
> ------------------------------------------------------------------
>
> Key: HADOOP-4802
> URL: https://issues.apache.org/jira/browse/HADOOP-4802
> Project: Hadoop Core
> Issue Type: Bug
> Components: ipc
> Affects Versions: 0.18.2, 0.19.0
> Reporter: stack
> Attachments: 4802-v2.patch, 4802-v3.patch, 4802-v4-TRUNK.patch,
> 4802.patch
>
>
> The stack-based ByteArrayOutputStream in Server.Handler is reset each time
> through the run loop. This sets the BAOS 'size' back to zero, but the
> allocated backing buffer is left unaltered. If, during a Handler's lifecycle,
> any particular RPC response was fat -- megabytes, even -- the buffer expands
> during the write to accommodate that response but never shrinks afterwards.
> If a hosting Server has had more than one 'fat payload' occurrence, the
> resulting occupied heap can provoke memory woes (see
> https://issues.apache.org/jira/browse/HBASE-900?focusedCommentId=12654009#action_12654009
> for an extreme example; occasional payloads of 20-50MB with 30 handlers
> robbed the heap of 700MB).
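For illustration, here is a minimal sketch of the retention behavior and one
possible mitigation. The class name, initial size, and 64KB threshold are
assumptions for the example only, not necessarily what the attached patches do:

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of the retention problem and one possible fix (illustrative only).
public class ResponseBufferSketch {
  private static final int INITIAL_SIZE = 10 * 1024;       // assumed initial capacity
  private static final int MAX_RETAINED_SIZE = 64 * 1024;  // assumed shrink threshold

  private ByteArrayOutputStream buf = new ByteArrayOutputStream(INITIAL_SIZE);

  void writeResponse(byte[] payload) throws IOException {
    buf.reset();                 // count goes back to 0, but the backing byte[]
                                 // keeps whatever capacity the largest payload
                                 // so far forced it to grow to
    DataOutputStream out = new DataOutputStream(buf);
    out.write(payload);
    // ... ship buf.toByteArray() to the client here ...

    // Possible mitigation: while the stream is still "full", check how big this
    // response was and, if it was fat, drop the whole stream so the oversized
    // backing array becomes garbage-collectable.
    if (buf.size() > MAX_RETAINED_SIZE) {
      buf = new ByteArrayOutputStream(INITIAL_SIZE);
    }
  }
}
{code}

The check has to run before (or instead of) reset(), since ByteArrayOutputStream
exposes only its written count, not the capacity of its backing array.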