[ https://issues.apache.org/jira/browse/KAFKA-204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13155740#comment-13155740 ]

Jay Kreps commented on KAFKA-204:
---------------------------------

We have request limits in the server and consumer that should provide 
protection against this, so I think that is the appropriate way to handle it. 
If it does happen, I think we should just break everything, and the person 
running things should set the configs correctly to limit the max request size 
the server will accept and the max fetch size for the client.
                
> BoundedByteBufferReceive hides OutOfMemoryError
> -----------------------------------------------
>
>                 Key: KAFKA-204
>                 URL: https://issues.apache.org/jira/browse/KAFKA-204
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.7
>            Reporter: Chris Burroughs
>            Assignee: Chris Burroughs
>            Priority: Critical
>         Attachments: k204-v1.txt
>
>
>   private def byteBufferAllocate(size: Int): ByteBuffer = {
>     var buffer: ByteBuffer = null
>     try {
>       buffer = ByteBuffer.allocate(size)
>     }
>     catch {
>       case e: OutOfMemoryError =>
>         throw new RuntimeException("OOME with size " + size, e)
>       case e2 =>
>         throw e2
>     }
>     buffer
>   }
> This hides the fact that an Error occurred, and will likely result in some 
> log handler printing a message instead of the process exiting with a 
> non-zero status. Knowing how large the allocation was that caused an OOM is 
> really nice, so I'd suggest logging in byteBufferAllocate and then 
> re-throwing the OutOfMemoryError.
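
A minimal sketch of the fix the report suggests: record the requested size for diagnostics, then re-throw the original OutOfMemoryError so JVM-level Error handling still sees it. The object name and the use of stderr here are illustrative only, not the actual Kafka code or its logging setup:

```scala
import java.nio.ByteBuffer

object BufferAlloc {
  def byteBufferAllocate(size: Int): ByteBuffer = {
    try {
      ByteBuffer.allocate(size)
    } catch {
      case e: OutOfMemoryError =>
        // Note the size that triggered the OOM, but preserve the Error
        // instead of wrapping it in a RuntimeException.
        System.err.println("OOM allocating buffer of size " + size)
        throw e
    }
  }
}
```

On the success path this behaves exactly like a plain `ByteBuffer.allocate(size)`, so callers are unchanged; only the failure path gains the size log.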

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
