[ https://issues.apache.org/jira/browse/KAFKA-6020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17770509#comment-17770509 ]
Alexander Grzesik commented on KAFKA-6020:
------------------------------------------

It would definitely help for several of our use cases if a consumer could define, based on header filters, which messages it consumes. Currently we either filter on the consumer side, which increases bandwidth needs and also raises some security concerns, or use the Streams API to split general topics into more specific ones, which in some cases drastically increases the number of topics. We would be very happy if the proposed filter feature could become part of Kafka core.

> Broker side filtering
> ---------------------
>
>                 Key: KAFKA-6020
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6020
>             Project: Kafka
>          Issue Type: New Feature
>          Components: consumer
>            Reporter: Pavel Micka
>            Priority: Major
>              Labels: needs-kip
>
> Currently, it is not possible to filter messages on the broker side. Broker-side filtering is convenient for filters with very low selectivity (one message in a few thousand). In my case it means transferring several GB of data to the consumer, throwing it away, taking one message, and doing it again...
> While I understand that filtering by message body is not feasible (for performance reasons), I propose to filter just by message key prefix. This can be achieved even without any deserialization, as the prefix to be matched can be passed as a byte array (hence the broker would just do an array prefix compare).
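
A minimal sketch of the consumer-side workaround described in the comment above, assuming byte[] keys; the topic name, bootstrap address, and prefix value are placeholders, and the PrefixFilteringConsumer class with its hasPrefix helper is illustrative, not part of Kafka. The raw prefix compare in hasPrefix is the same check the issue proposes the broker could apply before returning fetch data, so that non-matching records would never be transferred at all.

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PrefixFilteringConsumer {

    // Plain byte-array prefix compare, no deserialization needed -- the same check
    // the issue proposes the broker could perform before sending data to the client.
    static boolean hasPrefix(byte[] key, byte[] prefix) {
        if (key == null || key.length < prefix.length) {
            return false;
        }
        for (int i = 0; i < prefix.length; i++) {
            if (key[i] != prefix[i]) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("group.id", "prefix-filter-demo");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        byte[] wantedPrefix = "orders-eu".getBytes();        // placeholder key prefix

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic")); // placeholder topic
            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    // Every record still crosses the network; non-matching ones are simply
                    // dropped here, which is the bandwidth cost the comment describes.
                    if (hasPrefix(record.key(), wantedPrefix)) {
                        System.out.printf("matched key at offset %d%n", record.offset());
                    }
                }
            }
        }
    }
}
{code}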