[jira] [Created] (KAFKA-17094) Make it possible to list registered KRaft nodes in order to know which nodes should be unregistered

2024-07-05 Thread Jakub Scholz (Jira)
Jakub Scholz created KAFKA-17094:


 Summary: Make it possible to list registered KRaft nodes in order 
to know which nodes should be unregistered
 Key: KAFKA-17094
 URL: https://issues.apache.org/jira/browse/KAFKA-17094
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.7.1
Reporter: Jakub Scholz


Kafka seems to require that nodes removed from the cluster be unregistered 
using the Kafka Admin API. If they are not unregistered, you might run into 
problems later. For example, when you try to bump the KRaft metadata version 
after an upgrade, you might get an error like this:

 
{code:java}
org.apache.kafka.common.errors.InvalidUpdateVersionException: Invalid update 
version 19 for feature metadata.version. Broker 3002 only supports versions 
1-14 {code}
In this case, 3002 is an old node that was removed before the upgrade; it does 
not support KRaft metadata version 19 and blocks the metadata update.

 

However, it seems to be impossible to list the registered nodes in order to 
unregister them:
 * The describe cluster metadata request in the Admin API seems to return only 
the IDs of running brokers
 * The describe metadata quorum command seems to list the removed nodes in the 
list of observers. But it does so only until the controller nodes are restarted.

If Kafka expects the inactive nodes to be unregistered, it should provide a 
list of the registered nodes so that users can check which nodes need to be 
unregistered.
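What the issue asks for amounts to a simple set difference between registered and active node IDs. A minimal sketch of that bookkeeping (plain Python with illustrative IDs; the function is hypothetical, not a Kafka API):

```python
def nodes_to_unregister(registered_ids, active_ids):
    """Registered-but-inactive nodes are the ones that still need to be
    unregistered before a metadata version bump can succeed."""
    return sorted(set(registered_ids) - set(active_ids))

# Broker 3002 was removed from the cluster but never unregistered:
print(nodes_to_unregister({1000, 2000, 3000, 3002}, {1000, 2000, 3000}))  # [3002]
```

With such a list available, each remaining ID could then be passed to the Admin API's unregister call.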



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16606) JBOD support in KRaft does not seem to be gated by the metadata version

2024-04-23 Thread Jakub Scholz (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakub Scholz resolved KAFKA-16606.
--
Resolution: Not A Problem

> JBOD support in KRaft does not seem to be gated by the metadata version
> ---
>
> Key: KAFKA-16606
> URL: https://issues.apache.org/jira/browse/KAFKA-16606
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Jakub Scholz
>Priority: Major
>
> JBOD support in KRaft should be available since Kafka 3.7.0. The Kafka 
> [source 
> code|https://github.com/apache/kafka/blob/1b301b30207ed8fca9f0aea5cf940b0353a1abca/server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java#L194-L195]
>  suggests that it is supported with the metadata version {{{}3.7-IV2{}}}. 
> However, it seems to be possible to run a KRaft cluster with JBOD even with 
> older metadata versions such as {{{}3.6{}}}. For example, I have a cluster 
> using the {{3.6}} metadata version:
> {code:java}
> bin/kafka-features.sh --bootstrap-server localhost:9092 describe
> Feature: metadata.version       SupportedMinVersion: 3.0-IV1    
> SupportedMaxVersion: 3.7-IV4    FinalizedVersionLevel: 3.6-IV2  Epoch: 1375 
> {code}
> Yet a KRaft cluster with JBOD seems to run fine:
> {code:java}
> bin/kafka-log-dirs.sh --bootstrap-server localhost:9092 --describe
> Querying brokers for log directories information
> Received log directory information from brokers 2000,3000,1000
> {code}

[jira] [Created] (KAFKA-16606) JBOD support in KRaft does not seem to be gated by the metadata version

2024-04-23 Thread Jakub Scholz (Jira)
Jakub Scholz created KAFKA-16606:


 Summary: JBOD support in KRaft does not seem to be gated by the 
metadata version
 Key: KAFKA-16606
 URL: https://issues.apache.org/jira/browse/KAFKA-16606
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.7.0
Reporter: Jakub Scholz


JBOD support in KRaft should be available since Kafka 3.7.0. The Kafka [source 
code|https://github.com/apache/kafka/blob/1b301b30207ed8fca9f0aea5cf940b0353a1abca/server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java#L194-L195]
 suggests that it is supported with the metadata version {{{}3.7-IV2{}}}. 
However, it seems to be possible to run a KRaft cluster with JBOD even with 
older metadata versions such as {{{}3.6{}}}. For example, I have a cluster 
using the {{3.6}} metadata version:
{code:java}
bin/kafka-features.sh --bootstrap-server localhost:9092 describe
Feature: metadata.version       SupportedMinVersion: 3.0-IV1    
SupportedMaxVersion: 3.7-IV4    FinalizedVersionLevel: 3.6-IV2  Epoch: 1375 
{code}
Yet a KRaft cluster with JBOD seems to run fine:
{code:java}
bin/kafka-log-dirs.sh --bootstrap-server localhost:9092 --describe
Querying brokers for log directories information
Received log directory information from brokers 2000,3000,1000
{code}

[jira] [Created] (KAFKA-16131) Repeated UnsupportedVersionException logged when running Kafka 3.7.0-RC2 KRaft cluster with metadata version 3.6

2024-01-15 Thread Jakub Scholz (Jira)
Jakub Scholz created KAFKA-16131:


 Summary: Repeated UnsupportedVersionException logged when running 
Kafka 3.7.0-RC2 KRaft cluster with metadata version 3.6
 Key: KAFKA-16131
 URL: https://issues.apache.org/jira/browse/KAFKA-16131
 Project: Kafka
  Issue Type: Bug
Reporter: Jakub Scholz
 Fix For: 3.7.0


When running Kafka 3.7.0-RC2 as a KRaft cluster with the metadata version set 
to 3.6-IV2, it repeatedly throws errors like this in the controller logs:
{quote}2024-01-13 16:58:01,197 INFO [QuorumController id=0] 
assignReplicasToDirs: event failed with UnsupportedVersionException in 15 
microseconds. (org.apache.kafka.controller.QuorumController) 
[quorum-controller-0-event-handler]
2024-01-13 16:58:01,197 ERROR [ControllerApis nodeId=0] Unexpected error 
handling request RequestHeader(apiKey=ASSIGN_REPLICAS_TO_DIRS, apiVersion=0, 
clientId=1000, correlationId=14, headerVersion=2) – 
AssignReplicasToDirsRequestData(brokerId=1000, brokerEpoch=5, 
directories=[DirectoryData(id=w_uxN7pwQ6eXSMrOKceYIQ, 
topics=[TopicData(topicId=bvAKLSwmR7iJoKv2yZgygQ, 
partitions=[PartitionData(partitionIndex=2), PartitionData(partitionIndex=1)]), 
TopicData(topicId=uNe7f5VrQgO0zST6yH1jDQ, 
partitions=[PartitionData(partitionIndex=0)])])]) with context 
RequestContext(header=RequestHeader(apiKey=ASSIGN_REPLICAS_TO_DIRS, 
apiVersion=0, clientId=1000, correlationId=14, headerVersion=2), 
connectionId='172.16.14.219:9090-172.16.14.217:53590-7', 
clientAddress=/172.16.14.217, 
principal=User:CN=my-cluster-kafka,O=io.strimzi, 
listenerName=ListenerName(CONTROLPLANE-9090), securityProtocol=SSL, 
clientInformation=ClientInformation(softwareName=apache-kafka-java, 
softwareVersion=3.7.0), fromPrivilegedListener=false, 
principalSerde=Optional[org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder@71004ad2])
 (kafka.server.ControllerApis) [quorum-controller-0-event-handler]
java.util.concurrent.CompletionException: 
org.apache.kafka.common.errors.UnsupportedVersionException: Directory 
assignment is not supported yet.
at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:332)
 at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:347)
 at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:636)
 at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
 at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2162)
 at 
org.apache.kafka.controller.QuorumController$ControllerWriteEvent.complete(QuorumController.java:880)
 at 
org.apache.kafka.controller.QuorumController$ControllerWriteEvent.handleException(QuorumController.java:871)
 at 
org.apache.kafka.queue.KafkaEventQueue$EventContext.completeWithException(KafkaEventQueue.java:148)
 at 
org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:137)
 at 
org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:210)
 at 
org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:181)
 at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: org.apache.kafka.common.errors.UnsupportedVersionException: 
Directory assignment is not supported yet.
{quote}
 
With the metadata version set to 3.6-IV2, it makes sense that the request is 
not supported. But in that case, the request should not be sent at all.
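A sketch of the gating the paragraph above suggests: the sender checks the finalized metadata version before emitting the request, instead of letting the controller reject it. The version names are real metadata versions, but the ordering list and functions are illustrative, not Kafka's implementation:

```python
# Illustrative subset of metadata versions, in Kafka's ordering.
VERSIONS = ["3.6-IV2", "3.7-IV0", "3.7-IV1", "3.7-IV2"]

def supports_directory_assignment(metadata_version):
    # Directory assignment is reported to arrive with 3.7-IV2.
    return VERSIONS.index(metadata_version) >= VERSIONS.index("3.7-IV2")

def maybe_send_assign_replicas_to_dirs(metadata_version, send):
    """Only send ASSIGN_REPLICAS_TO_DIRS when the cluster's finalized
    metadata version supports it; otherwise skip it entirely."""
    if supports_directory_assignment(metadata_version):
        send()
        return True
    return False
```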





[jira] [Created] (KAFKA-16075) TLS configuration not validated in KRaft controller-only nodes

2024-01-02 Thread Jakub Scholz (Jira)
Jakub Scholz created KAFKA-16075:


 Summary: TLS configuration not validated in KRaft controller-only 
nodes
 Key: KAFKA-16075
 URL: https://issues.apache.org/jira/browse/KAFKA-16075
 Project: Kafka
  Issue Type: Bug
  Components: kraft
Affects Versions: 3.6.1
Reporter: Jakub Scholz


When a Kafka broker node (either a broker in a ZooKeeper-based cluster or a 
node with the broker role in a KRaft cluster) has an incorrect TLS 
configuration, such as an unsupported TLS cipher suite, it seems to throw a 
{{ConfigException}} and shut down:
{code:java}
2024-01-02 13:50:24,895 ERROR Exiting Kafka due to fatal exception during 
startup. (kafka.Kafka$) [main]
org.apache.kafka.common.config.ConfigException: Invalid value 
java.lang.IllegalArgumentException: Unsupported CipherSuite: 
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 for configuration A client SSLEngine 
created with the provided settings can't connect to a server SSLEngine created 
with those settings.
at 
org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:102)
at 
org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:73)
at 
org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:192)
at 
org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:107)
at kafka.network.Processor.<init>(SocketServer.scala:973)
at kafka.network.Acceptor.newProcessor(SocketServer.scala:879)
at 
kafka.network.Acceptor.$anonfun$addProcessors$1(SocketServer.scala:849)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:190)
at kafka.network.Acceptor.addProcessors(SocketServer.scala:848)
at kafka.network.DataPlaneAcceptor.configure(SocketServer.scala:523)
at 
kafka.network.SocketServer.createDataPlaneAcceptorAndProcessors(SocketServer.scala:251)
at kafka.network.SocketServer.$anonfun$new$31(SocketServer.scala:175)
at 
kafka.network.SocketServer.$anonfun$new$31$adapted(SocketServer.scala:175)
at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:576)
at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:574)
at scala.collection.AbstractIterable.foreach(Iterable.scala:933)
at kafka.network.SocketServer.<init>(SocketServer.scala:175)
at kafka.server.BrokerServer.startup(BrokerServer.scala:242)
at 
kafka.server.KafkaRaftServer.$anonfun$startup$2(KafkaRaftServer.scala:96)
at 
kafka.server.KafkaRaftServer.$anonfun$startup$2$adapted(KafkaRaftServer.scala:96)
at scala.Option.foreach(Option.scala:437)
at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:96)
at kafka.Kafka$.main(Kafka.scala:113)
at kafka.Kafka.main(Kafka.scala) {code}
But on KRaft controller-only nodes, such validation does not seem to happen, 
and the node keeps running and looping with this warning:
{code:java}
2024-01-02 13:53:10,186 WARN [RaftManager id=1] Error connecting to node 
my-cluster-controllers-0.my-cluster-kafka-brokers.myproject.svc.cluster.local:9090
 (id: 0 rack: null) (org.apache.kafka.clients.NetworkClient) 
[kafka-1-raft-outbound-request-thread]
java.io.IOException: Channel could not be created for socket 
java.nio.channels.SocketChannel[closed]
at 
org.apache.kafka.common.network.Selector.buildAndAttachKafkaChannel(Selector.java:348)
at 
org.apache.kafka.common.network.Selector.registerChannel(Selector.java:329)
at org.apache.kafka.common.network.Selector.connect(Selector.java:256)
at 
org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:1032)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:301)
at 
org.apache.kafka.server.util.InterBrokerSendThread.sendRequests(InterBrokerSendThread.java:145)
at 
org.apache.kafka.server.util.InterBrokerSendThread.pollOnce(InterBrokerSendThread.java:108)
at 
org.apache.kafka.server.util.InterBrokerSendThread.doWork(InterBrokerSendThread.java:136)
at 
org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130)
Caused by: org.apache.kafka.common.KafkaException: 
java.lang.IllegalArgumentException: Unsupported CipherSuite: 
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
at 
org.apache.kafka.common.network.SslChannelBuilder.buildChannel(SslChannelBuilder.java:111)
at 
org.apache.kafka.common.network.Selector.buildAndAttachKafkaChannel(Selector.java:338)
... 8 more
Caused by: java.lang.IllegalArgumentException: Unsupported CipherSuite: 
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
at 
java.base/sun.security.ssl.CipherSuite.validValuesOf(CipherSuite.java:978)
at 
java.base/sun.security.ssl.SSLEngineImpl.setEnabledCipherSuites(SSLEngineImpl.java:864)
 ...
{code}

[jira] [Created] (KAFKA-15949) Improve the KRaft metadata version related messages

2023-11-29 Thread Jakub Scholz (Jira)
Jakub Scholz created KAFKA-15949:


 Summary: Improve the KRaft metadata version related messages
 Key: KAFKA-15949
 URL: https://issues.apache.org/jira/browse/KAFKA-15949
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 3.6.0
Reporter: Jakub Scholz


Various error messages related to KRaft seem to use very different styles and 
formatting. For example, the {{StorageTool}} Scala class alone contains two 
different styles:
 * {{Must specify a valid KRaft metadata version of at least 3.0.}}
 ** Refers to "metadata version"
 ** Refers to the version as 3.0 (although strictly speaking 3.0-IV0 is not 
valid for KRaft)
 * {{SCRAM is only supported in metadataVersion IBP_3_5_IV2 or later.}}
 ** Talks about "metadataVersion"
 ** Refers to "IBP_3_5_IV2" instead of "3.5" or "3.5-IV2"

Other pieces of Kafka code also seem to refer to "metadata.version", for 
example.

For users, it would be nice if the style and format were the same everywhere. 
Would it be worth unifying messages like this? If so, what would be the 
preferred style to use?
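If the project settled on the "3.5-IV2" style, internal enum names could be rendered consistently with a small helper along these lines (an illustrative Python sketch, not Kafka code):

```python
import re

def human_metadata_version(name):
    """Render an internal name like IBP_3_5_IV2 in the user-facing
    3.5-IV2 style; anything unrecognized is returned unchanged."""
    m = re.fullmatch(r"IBP_(\d+)_(\d+)(?:_(IV\d+))?", name)
    if not m:
        return name
    major, minor, iv = m.groups()
    return f"{major}.{minor}-{iv}" if iv else f"{major}.{minor}"

print(human_metadata_version("IBP_3_5_IV2"))  # 3.5-IV2
```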





[jira] [Created] (KAFKA-15464) Allow dynamic reloading of certificates with different DN / SANs

2023-09-13 Thread Jakub Scholz (Jira)
Jakub Scholz created KAFKA-15464:


 Summary: Allow dynamic reloading of certificates with different DN 
/ SANs
 Key: KAFKA-15464
 URL: https://issues.apache.org/jira/browse/KAFKA-15464
 Project: Kafka
  Issue Type: Improvement
Reporter: Jakub Scholz


Kafka currently doesn't allow dynamic reloading of keystores when the new key 
has a different DN or removes some of the SANs. While it might help to prevent 
users from breaking their cluster, in some cases it would be great to be able 
to bypass this validation when desired.

More details are in the [KIP-978: Allow dynamic reloading of certificates with 
different DN / 
SANs|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263429128]





[jira] [Created] (KAFKA-14941) Document which configuration options are applicable only to processes with broker role or controller role

2023-04-26 Thread Jakub Scholz (Jira)
Jakub Scholz created KAFKA-14941:


 Summary: Document which configuration options are applicable only 
to processes with broker role or controller role
 Key: KAFKA-14941
 URL: https://issues.apache.org/jira/browse/KAFKA-14941
 Project: Kafka
  Issue Type: Improvement
Reporter: Jakub Scholz


When running in KRaft mode, some configuration options are applicable only to 
nodes with the broker process role, and some only to nodes with the controller 
process role. It would be great if this information were part of the 
documentation (e.g. in the [Broker 
Configs|https://kafka.apache.org/documentation/#brokerconfigs] table on the 
website), but also part of the config classes, so that it could be used when 
configuration is generated dynamically - for example, to filter the options 
applicable to different nodes. This would allow configuration files to contain 
only the options actually used and, for example, help reduce unnecessary 
restarts when rolling out new configurations.

For some options it seems clear, and the Kafka node would refuse to start if 
they are set - for example, the configuration of non-controller listeners on 
controller-only nodes. For others, it seems less clear (does the 
{{compression.type}} option apply to controller-only nodes? Or the 
configurations for the offsets topic? etc.).
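If the config classes carried per-option role metadata as suggested, a tool generating node configurations could filter options like this. The role assignments below are made-up examples, precisely because Kafka does not publish this information today:

```python
# Hypothetical role metadata; options without an entry are assumed to
# apply to both roles.
CONFIG_ROLES = {
    "offsets.topic.num.partitions": {"broker"},
    "compression.type": {"broker"},
    "controller.quorum.voters": {"broker", "controller"},
}

def configs_for_role(configs, role):
    """Keep only the options applicable to a node with the given role."""
    return {k: v for k, v in configs.items()
            if role in CONFIG_ROLES.get(k, {"broker", "controller"})}

node_config = {"compression.type": "producer",
               "controller.quorum.voters": "0@ctrl-0:9090"}
print(configs_for_role(node_config, "controller"))
# {'controller.quorum.voters': '0@ctrl-0:9090'}
```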





[jira] [Created] (KAFKA-14357) Make it possible to batch describe requests in the Kafka Admin API

2022-11-04 Thread Jakub Scholz (Jira)
Jakub Scholz created KAFKA-14357:


 Summary: Make it possible to batch describe requests in the Kafka 
Admin API
 Key: KAFKA-14357
 URL: https://issues.apache.org/jira/browse/KAFKA-14357
 Project: Kafka
  Issue Type: Improvement
Reporter: Jakub Scholz


The Admin API has several methods to describe different objects such as ACLs, 
quotas, or SCRAM-SHA users. But these APIs seem to be usable only in one of two 
modes:
 * Query one user's ACLs / quotas / SCRAM-SHA credentials
 * Query all existing ACLs / quotas / SCRAM-SHA credentials

But there seems to be no way to batch describe requests for multiple users, 
e.g. {_}describe the ACLs of users Joe, John and Mike{_}. It would be nice to 
have such an option, as it would let applications using the Admin API make 
fewer API calls.
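Until a batch mode exists, the workaround is client-side fan-out: one describe call per user, merged into a single result. A toy sketch (the {{describe_one}} callable stands in for whichever Admin API describe method is being batched):

```python
def describe_many(describe_one, users):
    """Client-side stand-in for a missing server-side batch describe:
    one request per user, results merged by user name."""
    return {user: describe_one(user) for user in users}

# Stub "server" mapping users to their ACLs, for illustration only.
acls = {"Joe": ["topic:read"], "John": ["topic:write"], "Mike": []}
print(describe_many(lambda u: acls.get(u, []), ["Joe", "John", "Mike"]))
```

A server-side batch would reduce this to a single round trip, which is the point of the request.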





[jira] [Created] (KAFKA-14356) Make it possible to detect changes to SCRAM-SHA credentials using the Admin API

2022-11-04 Thread Jakub Scholz (Jira)
Jakub Scholz created KAFKA-14356:


 Summary: Make it possible to detect changes to SCRAM-SHA 
credentials using the Admin API
 Key: KAFKA-14356
 URL: https://issues.apache.org/jira/browse/KAFKA-14356
 Project: Kafka
  Issue Type: Improvement
Reporter: Jakub Scholz


When using the Kafka Admin API to manage SCRAM-SHA credentials, the API seems 
to offer only three options:
 * Find out whether a given user has any credentials
 * Set SCRAM-SHA credentials
 * Delete SCRAM-SHA credentials

There is no way to find out what the current credentials are. That makes 
sense, as it could lead to the credentials being leaked, which would be a 
security issue. However, there is also no way to find out whether the 
credentials have changed since last time.

So if you have an external tool that manages the SCRAM-SHA credentials based 
on some desired state in a controller loop (as, for example, a Kubernetes 
operator would do), there is no way to know whether you need to update the 
password in Kafka. As a result, you always have to update the credentials.

It would be great to have some mechanism to detect whether the credentials 
changed since last time, e.g.:
 * A timestamp of the last change
 * Some random hash assigned during each change of the credentials, which can 
be compared before updating the credentials
 * Or possibly the offset of the KRaft metadata log where the credentials are 
stored.

An application managing the passwords would get the indicator as a response to 
the call updating the password and could store it. In the next loop, it could 
describe the credentials, which would return the latest indicator, compare it 
with the stored one, and if they were equal, it would know that it does not 
need to update the credentials.

If providing such an indicator as part of the describe request would not be 
considered secure, maybe there could at least be some kind of conditional 
update call: the tool managing the passwords would get the change indicator as 
a response to the update call and pass it in subsequent update calls, and the 
broker would evaluate server-side whether the credentials changed and whether 
the update should be applied or ignored.
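The controller-loop pattern described above could look like this with an opaque change indicator. The store below is a toy: the indicator is a random token issued on every change, so it reveals nothing about the password itself (everything here is hypothetical, not an existing Kafka API):

```python
import secrets

class ScramStore:
    """Toy credential store that hands out an opaque change indicator."""
    def __init__(self):
        self._indicator = None

    def set_credentials(self, password):
        # New random token on every change; not derived from the password.
        self._indicator = secrets.token_hex(8)
        return self._indicator

    def describe(self):
        # Returns only the indicator, never the credentials themselves.
        return self._indicator

store = ScramStore()
seen = store.set_credentials("s3cret")   # operator stores the indicator
needs_update = store.describe() != seen  # next loop: False, skip the update
```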





[jira] [Created] (KAFKA-13937) StandardAuthorizer throws "ID 5t1jQ3zWSfeVLMYkN3uong not found in aclsById" exceptions into broker logs

2022-05-25 Thread Jakub Scholz (Jira)
Jakub Scholz created KAFKA-13937:


 Summary: StandardAuthorizer throws "ID 5t1jQ3zWSfeVLMYkN3uong not 
found in aclsById" exceptions into broker logs
 Key: KAFKA-13937
 URL: https://issues.apache.org/jira/browse/KAFKA-13937
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.2.0
Reporter: Jakub Scholz


I'm trying to use the new {{StandardAuthorizer}} in a Kafka cluster running in 
KRaft mode. When managing the ACLs using the Admin API, the authorizer seems to 
throw a lot of runtime exceptions in the log. For example ...

When creating an ACL rule, it seems to create it just fine. But it throws the 
following exception:
{code:java}
2022-05-25 11:09:18,074 ERROR [StandardAuthorizer 0] addAcl error 
(org.apache.kafka.metadata.authorizer.StandardAuthorizerData) [EventHandler]
java.lang.RuntimeException: An ACL with ID 5t1jQ3zWSfeVLMYkN3uong already 
exists.
    at 
org.apache.kafka.metadata.authorizer.StandardAuthorizerData.addAcl(StandardAuthorizerData.java:169)
    at 
org.apache.kafka.metadata.authorizer.StandardAuthorizer.addAcl(StandardAuthorizer.java:83)
    at 
kafka.server.metadata.BrokerMetadataPublisher.$anonfun$publish$19(BrokerMetadataPublisher.scala:234)
    at 
java.base/java.util.LinkedHashMap$LinkedEntrySet.forEach(LinkedHashMap.java:671)
    at 
kafka.server.metadata.BrokerMetadataPublisher.$anonfun$publish$18(BrokerMetadataPublisher.scala:232)
    at 
kafka.server.metadata.BrokerMetadataPublisher.$anonfun$publish$18$adapted(BrokerMetadataPublisher.scala:221)
    at scala.Option.foreach(Option.scala:437)
    at 
kafka.server.metadata.BrokerMetadataPublisher.publish(BrokerMetadataPublisher.scala:221)
    at 
kafka.server.metadata.BrokerMetadataListener.kafka$server$metadata$BrokerMetadataListener$$publish(BrokerMetadataListener.scala:258)
    at 
kafka.server.metadata.BrokerMetadataListener$HandleCommitsEvent.$anonfun$run$2(BrokerMetadataListener.scala:119)
    at 
kafka.server.metadata.BrokerMetadataListener$HandleCommitsEvent.$anonfun$run$2$adapted(BrokerMetadataListener.scala:119)
    at scala.Option.foreach(Option.scala:437)
    at 
kafka.server.metadata.BrokerMetadataListener$HandleCommitsEvent.run(BrokerMetadataListener.scala:119)
    at 
org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:121)
    at 
org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:200)
    at 
org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:173)
    at java.base/java.lang.Thread.run(Thread.java:829)
2022-05-25 11:09:18,076 ERROR [BrokerMetadataPublisher id=0] Error publishing 
broker metadata at OffsetAndEpoch(offset=3, epoch=1) 
(kafka.server.metadata.BrokerMetadataPublisher) [EventHandler]
java.lang.RuntimeException: An ACL with ID 5t1jQ3zWSfeVLMYkN3uong already 
exists.
    at 
org.apache.kafka.metadata.authorizer.StandardAuthorizerData.addAcl(StandardAuthorizerData.java:169)
    at 
org.apache.kafka.metadata.authorizer.StandardAuthorizer.addAcl(StandardAuthorizer.java:83)
    at 
kafka.server.metadata.BrokerMetadataPublisher.$anonfun$publish$19(BrokerMetadataPublisher.scala:234)
    at 
java.base/java.util.LinkedHashMap$LinkedEntrySet.forEach(LinkedHashMap.java:671)
    at 
kafka.server.metadata.BrokerMetadataPublisher.$anonfun$publish$18(BrokerMetadataPublisher.scala:232)
    at 
kafka.server.metadata.BrokerMetadataPublisher.$anonfun$publish$18$adapted(BrokerMetadataPublisher.scala:221)
    at scala.Option.foreach(Option.scala:437)
    at 
kafka.server.metadata.BrokerMetadataPublisher.publish(BrokerMetadataPublisher.scala:221)
    at 
kafka.server.metadata.BrokerMetadataListener.kafka$server$metadata$BrokerMetadataListener$$publish(BrokerMetadataListener.scala:258)
    at 
kafka.server.metadata.BrokerMetadataListener$HandleCommitsEvent.$anonfun$run$2(BrokerMetadataListener.scala:119)
    at 
kafka.server.metadata.BrokerMetadataListener$HandleCommitsEvent.$anonfun$run$2$adapted(BrokerMetadataListener.scala:119)
    at scala.Option.foreach(Option.scala:437)
    at 
kafka.server.metadata.BrokerMetadataListener$HandleCommitsEvent.run(BrokerMetadataListener.scala:119)
    at 
org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:121)
    at 
org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:200)
    at 
org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:173)
    at java.base/java.lang.Thread.run(Thread.java:829)
2022-05-25 11:09:18,077 ERROR [BrokerMetadataListener id=0] Unexpected error 
handling HandleCommitsEvent (kafka.server.metadata.BrokerMetadataListener) 
[EventHandler]
java.lang.RuntimeException: An ACL with ID 5t1jQ3zWSfeVLMYkN3uong already 
exists.
    at 
org.apache.kafka.metadata.authorizer.StandardAuthorizerData.addAcl(StandardAuthorizerData.java:169)
    ...
{code}

[jira] [Created] (KAFKA-13839) Example connectors Maven artifact should use provided dependencies

2022-04-20 Thread Jakub Scholz (Jira)
Jakub Scholz created KAFKA-13839:


 Summary: Example connectors Maven artifact should use provided 
dependencies
 Key: KAFKA-13839
 URL: https://issues.apache.org/jira/browse/KAFKA-13839
 Project: Kafka
  Issue Type: Bug
Reporter: Jakub Scholz


The {{connect-file}} artifact, which contains the sample 
{{FileStreamSourceConnector}} and {{FileStreamSinkConnector}} connectors, 
currently has two Maven dependencies:
 * {{connect-api}}
 * {{slf4j}}

Both are marked as runtime dependencies. So when the connectors are pulled 
from a Maven repository to be added to the plugin path, they pull in these 
dependencies and other transitive dependencies (such as {{{}kafka-clients{}}}) 
with them. This seems unnecessary, since all these dependencies are already on 
the classpath of Kafka itself and do not need to be in the plugin path again. 
They should be configured as {{provided}} so that they are not downloaded with 
the connectors.

This is more relevant after 
https://issues.apache.org/jira/browse/KAFKA-13748, since the connectors are no 
longer on the main classpath and downloading them from Maven to the plugin 
path will be more common.
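The change requested here would look roughly like this in the {{connect-file}} POM (a sketch; the exact coordinates and version handling in the real build may differ):

```xml
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>connect-api</artifactId>
  <version>${kafka.version}</version>
  <!-- provided: already on the Connect runtime classpath, so it is
       not downloaded into the plugin path with the connectors -->
  <scope>provided</scope>
</dependency>
```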





[jira] [Created] (KAFKA-9121) Mirror Maker 2.0 doesn't handle the topic names in consumer checkpoints properly when topic name contain separator

2019-10-30 Thread Jakub Scholz (Jira)
Jakub Scholz created KAFKA-9121:
---

 Summary: Mirror Maker 2.0 doesn't handle the topic names in 
consumer checkpoints properly when topic name contain separator
 Key: KAFKA-9121
 URL: https://issues.apache.org/jira/browse/KAFKA-9121
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Affects Versions: 2.4.0
Reporter: Jakub Scholz


I was trying Kafka Mirror Maker 2.0 and ran into the following situation:

1) I have two Kafka clusters, each with a {{kafka-test-apps}} topic.
2) I configured Mirror Maker with {{replication.policy.separator=-}} and with 
mirroring between clusters {{a}} and {{b}}.
3) When running Mirror Maker, the mirroring of topics works fine. But when I 
use {{RemoteClusterUtils}} to recover the offsets, the names of the topics for 
which offsets are found are {{a-kafka-test-apps}} and {{apps}}, while the 
expected topic names would be {{a-kafka-test-apps}} and {{kafka-test-apps}}.

I tried to find the issue but haven't found it so far. It doesn't seem to be 
in {{RemoteClusterUtils}}, because the topic names are already wrong in the 
{{checkpoints.internal}} topic, so they are probably already processed the 
wrong way in the source cluster.

When I use {{.}} as the separator, it seems to work fine for me. It looks like 
the problem occurs only when the original topic names already contain the 
separator. But picking the right separator might not be a solution, because 
you might have topics with all kinds of characters and still run into this 
problem.
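The symptom (recovering {{apps}} instead of {{kafka-test-apps}}) is consistent with the source topic being split off at the last occurrence of the separator instead of the first. A toy illustration of the two parsing strategies (not MirrorMaker's actual code):

```python
def upstream_topic_last(remote_topic, sep):
    # Split at the LAST separator: breaks when the original topic name
    # itself contains the separator.
    return remote_topic.rsplit(sep, 1)[1]

def upstream_topic_first(remote_topic, sep):
    # Split at the FIRST separator: safe as long as cluster aliases
    # never contain the separator.
    return remote_topic.split(sep, 1)[1]

print(upstream_topic_last("a-kafka-test-apps", "-"))   # apps  (the bug)
print(upstream_topic_first("a-kafka-test-apps", "-"))  # kafka-test-apps
```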





[jira] [Created] (KAFKA-8053) kafka-topics.sh gives confusing error message when the topic doesn't exist

2019-03-06 Thread Jakub Scholz (JIRA)
Jakub Scholz created KAFKA-8053:
---

 Summary: kafka-topics.sh gives confusing error message when the 
topic doesn't exist
 Key: KAFKA-8053
 URL: https://issues.apache.org/jira/browse/KAFKA-8053
 Project: Kafka
  Issue Type: Bug
Reporter: Jakub Scholz


The kafka-topics.sh utility gives a confusing message when the topic it is 
called with doesn't exist, or when no topics exist at all:

{code}
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic xxx
Error while executing topic command : Topics in [] does not exist
[2019-03-06 13:26:33,982] ERROR java.lang.IllegalArgumentException: Topics in 
[] does not exist
at 
kafka.admin.TopicCommand$.kafka$admin$TopicCommand$$ensureTopicExists(TopicCommand.scala:416)
at 
kafka.admin.TopicCommand$ZookeeperTopicService.describeTopic(TopicCommand.scala:332)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:66)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
(kafka.admin.TopicCommand$)
{code}

It tries to list the topics, but because the list of topics is always empty, 
it always prints just `[]`. The error message should be more useful and 
instead list the topic passed by the user as the parameter, or not try to list 
anything at all.





[jira] [Resolved] (KAFKA-2544) Replication tools wiki page needs to be updated

2018-02-23 Thread Jakub Scholz (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakub Scholz resolved KAFKA-2544.
-
Resolution: Fixed
  Assignee: Jakub Scholz

This issue has been resolved.

> Replication tools wiki page needs to be updated
> ---
>
> Key: KAFKA-2544
> URL: https://issues.apache.org/jira/browse/KAFKA-2544
> Project: Kafka
>  Issue Type: Improvement
>  Components: website
>Affects Versions: 0.8.2.1
>Reporter: Stevo Slavic
>Assignee: Jakub Scholz
>Priority: Minor
>  Labels: documentation, newbie
>
> https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools is 
> outdated, mentions tools which have been heavily refactored or replaced by 
> other tools, e.g. add partition tool, list/create topics tools, etc.
> Please have the replication tools wiki page updated.





[jira] [Created] (KAFKA-6008) Kafka Connect: Unsanitized workerID causes exception during startup

2017-10-04 Thread Jakub Scholz (JIRA)
Jakub Scholz created KAFKA-6008:
---

 Summary: Kafka Connect: Unsanitized workerID causes exception 
during startup
 Key: KAFKA-6008
 URL: https://issues.apache.org/jira/browse/KAFKA-6008
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 1.0.0
 Environment: MacOS, Java 1.8.0_77-b03
Reporter: Jakub Scholz
 Fix For: 1.0.0


When Kafka Connect starts, it seems to use an unsanitized workerId for 
creating metrics. As a result, it throws the following exception:
{code}
[2017-10-04 13:16:08,886] WARN Error registering AppInfo mbean 
(org.apache.kafka.common.utils.AppInfoParser:66)
javax.management.MalformedObjectNameException: Invalid character ':' in value 
part of property
at javax.management.ObjectName.construct(ObjectName.java:618)
at javax.management.ObjectName.<init>(ObjectName.java:1382)
at 
org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:60)
at 
org.apache.kafka.connect.runtime.ConnectMetrics.<init>(ConnectMetrics.java:77)
at org.apache.kafka.connect.runtime.Worker.(Worker.java:88)
at 
org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:81)
{code}

It looks like in my case the generated workerId contains a ':' character. The 
workerId should be sanitized before the metric is created.
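A fix along these lines could replace the characters that JMX treats as special in an ObjectName value before the metric is registered. This is only a hypothetical sketch; the class name, method name, and exact character set are assumptions, not Kafka's actual code:

```java
public class WorkerIdSanitizer {
    // Replace characters that are special in a JMX ObjectName value
    // (colon, quote, comma, equals, wildcards, backslash) with underscores.
    public static String sanitize(String workerId) {
        return workerId.replaceAll("[:\",=*?\\\\]", "_");
    }
}
```

With this, a host:port-style workerId such as {{192.168.1.1:8083}} would become {{192.168.1.1_8083}} and register cleanly.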



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5969) bin/kafka-preferred-replica-election.sh gives misleading error when invalid JSON file is passed as parameter

2017-09-25 Thread Jakub Scholz (JIRA)
Jakub Scholz created KAFKA-5969:
---

 Summary: bin/kafka-preferred-replica-election.sh gives misleading 
error when invalid JSON file is passed as parameter
 Key: KAFKA-5969
 URL: https://issues.apache.org/jira/browse/KAFKA-5969
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Jakub Scholz


When an invalid JSON file is passed to the bin/kafka-preferred-replica-election.sh 
/ PreferredReplicaLeaderElectionCommand tool, it gives a misleading error:
{code}
kafka.admin.AdminOperationException: Preferred replica election data is empty
at 
kafka.admin.PreferredReplicaLeaderElectionCommand$.parsePreferredReplicaElectionData(PreferredReplicaLeaderElectionCommand.scala:97)
at 
kafka.admin.PreferredReplicaLeaderElectionCommand$.main(PreferredReplicaLeaderElectionCommand.scala:66)
at 
kafka.admin.PreferredReplicaLeaderElectionCommand.main(PreferredReplicaLeaderElectionCommand.scala)
{code}

It suggests that the data is empty rather than invalid, which can confuse 
users. The exception text should be fixed.
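The confusion stems from one message covering two different cases. A minimal sketch of the distinction (the class name and messages are illustrative, not the actual Kafka code):

```java
public class ElectionDataCheck {
    // Distinguish genuinely empty input from syntactically invalid JSON,
    // so each case can produce an accurate error message.
    public static String classify(String jsonData) {
        String trimmed = jsonData == null ? "" : jsonData.trim();
        if (trimmed.isEmpty()) {
            return "Preferred replica election data is empty";
        }
        if (!trimmed.startsWith("{") && !trimmed.startsWith("[")) {
            return "Preferred replica election data is not valid JSON";
        }
        return "ok";
    }
}
```

Real code would hand the non-empty case to the JSON parser and report parse failures explicitly instead of falling through to the "empty" message.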





[jira] [Created] (KAFKA-5940) kafka-delete-records.sh doesn't give any feedback when the JSON offset configuration file is invalid

2017-09-20 Thread Jakub Scholz (JIRA)
Jakub Scholz created KAFKA-5940:
---

 Summary: kafka-delete-records.sh doesn't give any feedback when 
the JSON offset configuration file is invalid
 Key: KAFKA-5940
 URL: https://issues.apache.org/jira/browse/KAFKA-5940
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Jakub Scholz
Assignee: Jakub Scholz


When deleting records using {{bin/kafka-delete-records.sh}}, the user has to 
pass a JSON file listing the topics/partitions and the offset up to which the 
records should be deleted. However, when such a file is invalid, the utility 
currently doesn't print any visible error:

{code}
$ bin/kafka-delete-records.sh --bootstrap-server localhost:9092 
--offset-json-file offset.json
Executing records delete operation
Records delete operation completed:
$
{code}

Instead, I would suggest throwing an exception to make it clear that the 
problem is the invalid JSON file:
{code}
$ bin/kafka-delete-records.sh --bootstrap-server localhost:9092 
--offset-json-file offset.json
Exception in thread "main" kafka.common.AdminCommandFailedException: Offset 
json file doesn't contain valid JSON data.
at 
kafka.admin.DeleteRecordsCommand$.parseOffsetJsonStringWithoutDedup(DeleteRecordsCommand.scala:54)
at 
kafka.admin.DeleteRecordsCommand$.execute(DeleteRecordsCommand.scala:62)
at kafka.admin.DeleteRecordsCommand$.main(DeleteRecordsCommand.scala:37)
at kafka.admin.DeleteRecordsCommand.main(DeleteRecordsCommand.scala)
$
{code}
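The gist of the suggested behavior can be sketched as a fail-fast validator. This is a simplified illustration (the class name is hypothetical, and real code would run the input through a JSON parser rather than this crude shape check):

```java
public class OffsetJsonValidator {
    // Fail fast with an explicit error instead of silently doing nothing.
    public static void validate(String json) {
        String t = json == null ? "" : json.trim();
        boolean looksLikeJson = (t.startsWith("{") && t.endsWith("}"))
                             || (t.startsWith("[") && t.endsWith("]"));
        if (!looksLikeJson) {
            throw new IllegalArgumentException(
                "Offset json file doesn't contain valid JSON data.");
        }
    }
}
```

Called before the delete operation starts, this turns the current silent no-op into an immediate, descriptive failure.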





[jira] [Created] (KAFKA-5933) Move timeout and validate_only protocol fields into CommonFields class

2017-09-19 Thread Jakub Scholz (JIRA)
Jakub Scholz created KAFKA-5933:
---

 Summary: Move timeout and validate_only protocol fields into 
CommonFields class
 Key: KAFKA-5933
 URL: https://issues.apache.org/jira/browse/KAFKA-5933
 Project: Kafka
  Issue Type: Bug
Reporter: Jakub Scholz


Most of the fields that are shared by multiple protocol messages (requests and 
responses) are in the CommonFields class. However, there are still some fields 
used multiple times that are not there yet:
* timeout
* validate_only

It would be good to move these two fields into the CommonFields class as well, 
so that they can be easily shared by different messages.
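The idea is simply to define each shared field once and reuse it from every schema. Shown here with plain string constants for brevity (Kafka's real CommonFields holds typed Field definitions; this is only a sketch of the consolidation):

```java
public final class CommonFields {
    // Shared protocol field names, defined once and reused by every
    // request/response schema that needs them.
    public static final String TIMEOUT = "timeout";             // how long to wait, in ms
    public static final String VALIDATE_ONLY = "validate_only"; // if true, validate but don't apply

    private CommonFields() { }  // constants holder, not instantiable
}
```

Each request schema then references {{CommonFields.TIMEOUT}} instead of redeclaring its own copy, so name, type, and documentation stay consistent across messages.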





[jira] [Created] (KAFKA-5918) Fix minor typos and errors in the Kafka Streams tutorial

2017-09-17 Thread Jakub Scholz (JIRA)
Jakub Scholz created KAFKA-5918:
---

 Summary: Fix minor typos and errors in the Kafka Streams tutorial
 Key: KAFKA-5918
 URL: https://issues.apache.org/jira/browse/KAFKA-5918
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Reporter: Jakub Scholz
Priority: Minor
 Fix For: 1.0.0


I found several minor issues with the Kafka Streams tutorial:
* Some typos
**  "As shown above, it illustrate that the constructed ..." instead of "As 
shown above, it illustrate_s_ that the constructed ..."
** "same as Pipe.java below" instead of "same as Pipe.java _above_"
** Wrong class name in the {{LineSplit}} example
* Incorrect imports for the code examples
** Missing {{import org.apache.kafka.streams.kstream.KStream;}} in 
{{LineSplit}} and {{WordCount}} example
* Unnecessary (and potentially confusing) split by whitespace in the 
{{WordCount}} class (the split into words already happened in {{LineSplit}})
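The last point can be seen with a plain-Java sketch of what the LineSplit step already produces (this mirrors the tutorial's flatMapValues logic, not the actual Streams code, and the class name is illustrative): once every line has been split into words, WordCount only needs to group and count, not split again.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LineSplitSketch {
    // Each input line becomes a list of lower-cased words, as in the
    // tutorial's LineSplit step; WordCount should not re-split these.
    public static List<String> splitLine(String line) {
        return Arrays.stream(line.toLowerCase().split("\\W+"))
                     .filter(w -> !w.isEmpty())
                     .collect(Collectors.toList());
    }
}
```

A second split by whitespace in WordCount would be a no-op at best, and misleading to readers trying to follow the data flow through the topology.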


