[ https://issues.apache.org/jira/browse/KAFKA-5062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15968265#comment-15968265 ]
Ismael Juma commented on KAFKA-5062:
------------------------------------

James, as stated previously, we check whether the size is above the configured value (100 MB by default) and disconnect if it is.

> Kafka brokers can accept malformed requests which allocate gigabytes of memory
> ------------------------------------------------------------------------------
>
>                 Key: KAFKA-5062
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5062
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Apurva Mehta
>
> In some circumstances, it is possible to cause a Kafka broker to allocate
> massive amounts of memory by writing malformed bytes to the broker's port.
> While investigating an issue, we saw byte arrays on the Kafka heap of up to
> 1.8 gigabytes, the first 360 bytes of which were non-Kafka requests -- an
> application was writing the wrong data to Kafka, causing the broker to
> interpret the request size as 1.8 GB and then allocate that amount. Apart
> from the first 360 bytes, the rest of the 1.8 GB byte array was null.
> We have socket.request.max.bytes set at 100 MB to protect against this kind
> of thing, but somehow that limit is not always respected. We need to
> investigate why and fix it.
> cc [~rnpridgeon], [~ijuma], [~gwenshap], [~cmccabe]

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
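
The guard described above can be sketched roughly as follows. This is a minimal illustration, not actual Kafka broker code: it assumes the broker reads a 4-byte size prefix from the socket, and the class and method names (`SizePrefixCheck`, `allocateForRequest`) are hypothetical. The idea is to validate the declared size against the configured maximum before allocating a buffer, rather than trusting arbitrary bytes on the wire.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of a size-prefix sanity check, in the spirit of
// socket.request.max.bytes. Not taken from the Kafka codebase.
public class SizePrefixCheck {
    // 100 MB, matching the default socket.request.max.bytes described above.
    static final int MAX_REQUEST_SIZE = 100 * 1024 * 1024;

    // Reads the 4-byte size prefix and allocates a buffer for the request
    // body only if the declared size is plausible; otherwise rejects it
    // (a broker would disconnect the client at this point).
    static ByteBuffer allocateForRequest(ByteBuffer sizePrefix) {
        int size = sizePrefix.getInt();
        if (size < 0 || size > MAX_REQUEST_SIZE) {
            throw new IllegalArgumentException("Invalid request size: " + size);
        }
        return ByteBuffer.allocate(size);
    }

    public static void main(String[] args) {
        // Well-formed prefix declaring a 16-byte request: allocation proceeds.
        ByteBuffer ok = ByteBuffer.allocate(4).putInt(16);
        ok.flip();
        System.out.println(allocateForRequest(ok).capacity());

        // Malformed prefix claiming ~1.8 GB (as in the report above):
        // rejected instead of allocated.
        ByteBuffer bad = ByteBuffer.allocate(4).putInt(1_800_000_000);
        bad.flip();
        try {
            allocateForRequest(bad);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Without such a check, the broker would trust the attacker-controlled (or garbage) size field and call `ByteBuffer.allocate` with it directly, which is exactly the 1.8 GB allocation observed in the heap dump.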