[jira] [Commented] (KAFKA-13894) Extend Kafka kerberos auth support to beyond only hostname

2022-05-11 Thread Yiming Zang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17535126#comment-17535126
 ] 

Yiming Zang commented on KAFKA-13894:
-

We have internal changes that already fix this issue, and we would like to merge 
them back to the open-source community. Could anyone take a look at this 
proposal and see if it makes sense?

> Extend Kafka kerberos auth support to beyond only hostname
> --
>
> Key: KAFKA-13894
> URL: https://issues.apache.org/jira/browse/KAFKA-13894
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: Yiming Zang
>Priority: Critical
>
> {*}Problem{*}:
> Currently the Kafka client only supports using the Kafka broker hostname in the 
> Kerberos authentication process ([Source 
> Code|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/network/SaslChannelBuilder.java#L231]).
> However, not all companies support per-host keytabs. It is common practice to 
> use a keytab containing a shared identity name instead. To support this kind of 
> Kerberos setup, we need to make changes so that Kafka supports a customized 
> service name rather than only the hostname for authentication.
> {*}Proposal{*}:
> To address this issue, we propose adding an extra client-side configuration for 
> Kerberos authentication. If the user provides that configuration, we will use 
> whatever is provided in place of the hostname; otherwise we will fall back to 
> the hostname. Here's an example:
>  
> {code:java}
> String kerberosServiceNameFromConfig = 
> (String)configs.get(SaslConfigs.SASL_KERBEROS_SERVICE_NAME);
> String hostnameOrServiceName = (kerberosServiceNameFromConfig == null || 
> kerberosServiceNameFromConfig.trim().isEmpty()) ? 
> socket.getInetAddress().getHostName() : kerberosServiceNameFromConfig;
> authenticatorCreator = () -> buildClientAuthenticator(configs,
>   saslCallbackHandlers.get(clientSaslMechanism),
>   id,
>   hostnameOrServiceName,
>   loginManager.serviceName(),
>   transportLayer,
>   subjects.get(clientSaslMechanism));{code}
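As a stand-alone illustration of the proposed fallback (class and method names here are hypothetical, not the actual SaslChannelBuilder code):

```java
// Hypothetical sketch of the proposed fallback: use the configured Kerberos
// service name when present, otherwise the peer hostname (today's behavior).
public class KerberosServiceName {
    public static String hostnameOrServiceName(String configured, String hostname) {
        // A null or blank configuration means "no override": default to the hostname.
        return (configured == null || configured.trim().isEmpty()) ? hostname : configured;
    }

    public static void main(String[] args) {
        // No override configured: default back to the broker hostname.
        System.out.println(hostnameOrServiceName(null, "broker1.example.com"));
        // Shared-identity keytab: the configured name wins.
        System.out.println(hostnameOrServiceName("kafka", "broker1.example.com"));
    }
}
```

With this behavior, clusters using per-host keytabs see no change, while shared-identity setups can opt in via configuration.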



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (KAFKA-13894) Extend Kafka kerberos auth support to beyond only hostname

2022-05-11 Thread Yiming Zang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiming Zang updated KAFKA-13894:

Priority: Critical  (was: Major)

> Extend Kafka kerberos auth support to beyond only hostname
> --
>
> Key: KAFKA-13894
> URL: https://issues.apache.org/jira/browse/KAFKA-13894
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: Yiming Zang
>Priority: Critical
>



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (KAFKA-13894) Extend Kafka kerberos auth support to beyond only hostname

2022-05-11 Thread Yiming Zang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiming Zang updated KAFKA-13894:

Component/s: clients

> Extend Kafka kerberos auth support to beyond only hostname
> --
>
> Key: KAFKA-13894
> URL: https://issues.apache.org/jira/browse/KAFKA-13894
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: Yiming Zang
>Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (KAFKA-13894) Extend Kafka kerberos auth support to beyond only hostname

2022-05-11 Thread Yiming Zang (Jira)
Yiming Zang created KAFKA-13894:
---

 Summary: Extend Kafka kerberos auth support to beyond only hostname
 Key: KAFKA-13894
 URL: https://issues.apache.org/jira/browse/KAFKA-13894
 Project: Kafka
  Issue Type: Improvement
Reporter: Yiming Zang


{*}Problem{*}:

Currently the Kafka client only supports using the Kafka broker hostname in the 
Kerberos authentication process ([Source 
Code|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/network/SaslChannelBuilder.java#L231]).

However, not all companies support per-host keytabs. It is common practice to 
use a keytab containing a shared identity name instead. To support this kind of 
Kerberos setup, we need to make changes so that Kafka supports a customized 
service name rather than only the hostname for authentication.

{*}Proposal{*}:

To address this issue, we propose adding an extra client-side configuration for 
Kerberos authentication. If the user provides that configuration, we will use 
whatever is provided in place of the hostname; otherwise we will fall back to 
the hostname. Here's an example:

 
{code:java}
String kerberosServiceNameFromConfig = 
(String)configs.get(SaslConfigs.SASL_KERBEROS_SERVICE_NAME);

String hostnameOrServiceName = (kerberosServiceNameFromConfig == null || 
kerberosServiceNameFromConfig.trim().isEmpty()) ? 
socket.getInetAddress().getHostName() : kerberosServiceNameFromConfig;

authenticatorCreator = () -> buildClientAuthenticator(configs,
  saslCallbackHandlers.get(clientSaslMechanism),
  id,
  hostnameOrServiceName,
  loginManager.serviceName(),
  transportLayer,
  subjects.get(clientSaslMechanism));{code}
 

 

 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (KAFKA-13418) Brokers disconnect intermittently with TLS1.3

2022-03-31 Thread Yiming Zang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17515438#comment-17515438
 ] 

Yiming Zang commented on KAFKA-13418:
-

Thanks a lot! [~ijuma]  [~skokoori] 

> Brokers disconnect intermittently with TLS1.3
> -
>
> Key: KAFKA-13418
> URL: https://issues.apache.org/jira/browse/KAFKA-13418
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.8.0
>Reporter: shylaja kokoori
>Assignee: shylaja kokoori
>Priority: Minor
> Fix For: 3.2.0, 3.1.1, 3.0.2
>
> Attachments: tls1_3.patch
>
>
> Using TLS1.3 (with JDK11) is causing a regression and an increase in 
> inter-broker p99 latency, as mentioned by Yiming in 
> [Kafka-9320|https://issues.apache.org/jira/browse/KAFKA-9320?focusedCommentId=17401818&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17401818].
>  We tested this with Kafka 2.8.
> The issue seems to be caused by a renegotiation exception thrown by 
> {code:java}
> read(ByteBuffer dst)
> {code}
>  and 
> {code:java}
> write(ByteBuffer src)
> {code}
>  in 
> _clients/src/main/java/org/apache/kafka/common/network/SslTransportLayer.java_.
> This exception causes the connection between the brokers to close before the 
> read/write completes. In our internal experiments we have seen the p99 
> latency stabilize when we remove this exception.
> Given that TLS1.3 does not support renegotiation, I would like to make the 
> check apply only to TLS1.2.
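To illustrate the proposed scoping (a hedged sketch only; the real change would live in SslTransportLayer's read/write paths, and this helper name is invented):

```java
// Hypothetical sketch of gating the renegotiation rejection by protocol.
// TLSv1.3 removed renegotiation entirely, so a handshake status seen after the
// initial handshake cannot be a renegotiation attempt on that protocol, and the
// rejection should only apply to TLS1.2 and older.
public class RenegotiationGate {
    public static boolean shouldRejectRenegotiation(String protocol) {
        return !"TLSv1.3".equals(protocol);
    }

    public static void main(String[] args) {
        System.out.println(shouldRejectRenegotiation("TLSv1.2")); // older protocols keep the check
        System.out.println(shouldRejectRenegotiation("TLSv1.3")); // TLS1.3 skips it
    }
}
```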



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (KAFKA-13418) Brokers disconnect intermittently with TLS1.3

2022-02-28 Thread Yiming Zang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17499167#comment-17499167
 ] 

Yiming Zang commented on KAFKA-13418:
-

Thanks [~skokoori] for creating this issue, and [~ijuma] for reviewing the pull 
request. This will solve the TLS issue Twitter has been seeing since upgrading 
to 2.7 with TLS 1.3.

> Brokers disconnect intermittently with TLS1.3
> -
>
> Key: KAFKA-13418
> URL: https://issues.apache.org/jira/browse/KAFKA-13418
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.8.0
>Reporter: shylaja kokoori
>Assignee: shylaja kokoori
>Priority: Minor
> Attachments: tls1_3.patch
>
>



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (KAFKA-2729) Cached zkVersion not equal to that in zookeeper, broker not recovering.

2022-01-27 Thread Yiming Zang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17483450#comment-17483450
 ] 

Yiming Zang commented on KAFKA-2729:


We are still seeing this issue on 2.7.0, so I'm not sure whether it has been 
resolved. When it happens, we get partitions under min ISR and produce requests 
start to fail. It is often triggered when a single broker is restarted.

> Cached zkVersion not equal to that in zookeeper, broker not recovering.
> ---
>
> Key: KAFKA-2729
> URL: https://issues.apache.org/jira/browse/KAFKA-2729
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1, 0.9.0.0, 0.10.0.0, 0.10.1.0, 0.11.0.0, 2.4.1
>Reporter: Danil Serdyuchenko
>Assignee: Onur Karaman
>Priority: Critical
> Fix For: 1.1.0
>
>
> After a small network wobble where zookeeper nodes couldn't reach each other, 
> we started seeing a large number of undereplicated partitions. The zookeeper 
> cluster recovered, however we continued to see a large number of 
> undereplicated partitions. Two brokers in the kafka cluster were showing this 
> in the logs:
> {code}
> [2015-10-27 11:36:00,888] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Shrinking ISR for 
> partition [__samza_checkpoint_event-creation_1,3] from 6,5 to 5 
> (kafka.cluster.Partition)
> [2015-10-27 11:36:00,891] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Cached zkVersion [66] 
> not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
> {code}
> For all of the topics on the affected brokers. Both brokers only recovered 
> after a restart. Our own investigation yielded nothing; I was hoping you 
> could shed some light on this issue, and on whether it is related to 
> https://issues.apache.org/jira/browse/KAFKA-1382 , although we're using 
> 0.8.2.1.
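The "skip updating ISR" log line reflects a versioned conditional update: the broker caches the znode version and the write is applied only if that cached version still matches. A minimal compare-and-set sketch (hypothetical names; the real logic lives in kafka.cluster.Partition and ZooKeeper's conditional setData):

```java
// Hypothetical sketch of the versioned-update pattern behind the log lines.
// An ISR update is applied only when the caller's cached version matches the
// store's current version; a stale version is skipped, mirroring the
// "Cached zkVersion [66] not equal to that in zookeeper" message.
public class VersionedIsrStore {
    private String isr = "6,5";
    private int zkVersion = 66;

    public synchronized boolean conditionalUpdate(String newIsr, int cachedVersion) {
        if (cachedVersion != zkVersion) {
            return false; // stale cached version: skip updating ISR
        }
        isr = newIsr;
        zkVersion++;
        return true;
    }

    public synchronized String currentIsr() { return isr; }
    public synchronized int currentVersion() { return zkVersion; }
}
```

A broker stuck with a stale cached version keeps failing this check until it refreshes its view, which is consistent with the brokers only recovering after a restart.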



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (KAFKA-9320) Enable TLSv1.3 by default and disable some of the older protocols

2021-08-19 Thread Yiming Zang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17401818#comment-17401818
 ] 

Yiming Zang edited comment on KAFKA-9320 at 8/19/21, 6:45 PM:
--

We have seen some regression after enabling TLS1.3 and upgrading to Kafka 
2.7.0; we have been seeing very frequent EOFExceptions and disconnections:
{code:java}
[2021-08-13 06:07:26,069] WARN [ReplicaFetcher replicaId=18, leaderId=20, 
fetcherId=0] Unexpected error from atla-alo-26-sr1.prod.twttr.net/10.41.44.125; 
closing connection (org.apache.kafka.common.network.Selector)
 java.io.EOFException: EOF during read
 at 
org.apache.kafka.common.network.SslTransportLayer.read(SslTransportLayer.java:627)
 at 
org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:118)
 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:466)
 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:416)
 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:729)
 at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:620)
 at org.apache.kafka.common.network.Selector.poll(Selector.java:520)
 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:562)
 at 
org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:96)
 at 
kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:110)
 at 
kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:211)
 at 
kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:310)
 at 
kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3(AbstractFetcherThread.scala:143)
 at 
kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:142)
 at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:122)
 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96){code}

 We had to roll back to TLS1.2, and that resolved the EOFException issue.



> Enable TLSv1.3 by default and disable some of the older protocols
> -
>
> Key: KAFKA-9320
> URL: https://issues.apache.org/jira/browse/KAFKA-9320
> Project: Kafka
>  Issue Type: New Feature
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.6.0
>
> Attachments: report.txt
>
>
> KAFKA-7251 added support for TLSv1.3. We should include this in the list of 
> protocols that are enabled by default. We should also disable some of the 
> older protocols that are not secure. This change requires a KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9320) Enable TLSv1.3 by default and disable some of the older protocols

2021-08-19 Thread Yiming Zang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17401818#comment-17401818
 ] 

Yiming Zang commented on KAFKA-9320:


We have seen some regression after enabling TLS1.3 and upgrading to Kafka 
2.7.0; we have been seeing very frequent EOFExceptions and disconnections 
(EOF during read in SslTransportLayer, surfacing in the replica fetcher 
threads). We had to roll back to TLS1.2, and that resolved the EOFException 
issue.

> Enable TLSv1.3 by default and disable some of the older protocols
> -
>
> Key: KAFKA-9320
> URL: https://issues.apache.org/jira/browse/KAFKA-9320
> Project: Kafka
>  Issue Type: New Feature
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.6.0
>
> Attachments: report.txt
>
>
> KAFKA-7251 added support for TLSv1.3. We should include this in the list of 
> protocols that are enabled by default. We should also disable some of the 
> older protocols that are not secure. This change requires a KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-6020) Broker side filtering

2019-03-06 Thread Yiming Zang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786242#comment-16786242
 ] 

Yiming Zang commented on KAFKA-6020:


Any updates on this?

We have similar needs on our side, and we strongly support the idea of 
broker-side filtering.

Our use case comes from N-DC replication. Imagine you have 5 data centers and 
you need to replicate data everywhere: typically you have to run N*(N-1), i.e. 
20, mirror-maker jobs to replicate messages from each local data center to all 
remote data centers. Each mirror maker has to read all 5 copies of the events, 
do some processing, and replicate only one fifth of them. This is a huge waste 
of network bandwidth and CPU resources. If we could pre-filter events on the 
broker side, mirror makers would no longer need to read all 5 copies, which 
would be a huge saving as we add more data centers in the future.

> Broker side filtering
> -
>
> Key: KAFKA-6020
> URL: https://issues.apache.org/jira/browse/KAFKA-6020
> Project: Kafka
>  Issue Type: New Feature
>  Components: consumer
>Reporter: Pavel Micka
>Priority: Major
>  Labels: needs-kip
>
> Currently, it is not possible to filter messages on the broker side. Broker-side 
> filtering is convenient for filters with very low selectivity (one message in a 
> few thousand). In my case it means transferring several GB of data to the 
> consumer, throwing it away, keeping one message, and doing it again...
> While I understand that filtering by message body is not feasible (for 
> performance reasons), I propose to filter just by message key prefix. This 
> can be achieved even without any deserialization, as the prefix to be matched 
> can be passed as an array (hence the broker would do just an array prefix 
> compare).
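A minimal sketch of such a prefix check (a hypothetical helper, not an existing Kafka API):

```java
import java.util.Arrays;

// Hypothetical sketch of the proposed broker-side check: match the raw
// serialized key bytes against a prefix with no deserialization at all.
public class KeyPrefixFilter {
    public static boolean keyHasPrefix(byte[] key, byte[] prefix) {
        if (key == null || key.length < prefix.length) {
            return false;
        }
        // Plain array-prefix compare over the serialized key bytes.
        return Arrays.equals(Arrays.copyOfRange(key, 0, prefix.length), prefix);
    }
}
```

In the N-DC scenario above, a mirror maker could pass a destination-DC prefix so the broker returns only the matching fraction of the traffic.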



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)