[jira] [Created] (KAFKA-17749) Throttle metrics have changed name

2024-10-09 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17749:
--

 Summary: Throttle metrics have changed name
 Key: KAFKA-17749
 URL: https://issues.apache.org/jira/browse/KAFKA-17749
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.8.0, 3.9.0
Reporter: Mickael Maison
Assignee: Mickael Maison


In 
https://github.com/apache/kafka/commit/e4e1116156d44d5e7a52ad8fb51a57d5e5755710, 
we moved the Throttler class to the storage module, but this changed the name of 
the metrics emitted by this class.

Since 3.8 the metrics are named 
org.apache.kafka.storage.internals.utils.Throttler. Previously they were called 
kafka.utils.Throttler.
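For monitoring setups affected by the rename, a minimal sketch of checking which 
name a broker currently exposes over JMX (the exact type/name attributes of the 
Throttler MBeans are assumptions here; only the group/package change is stated by 
this issue):

{code:scala}
import java.lang.management.ManagementFactory
import javax.management.ObjectName
import scala.jdk.CollectionConverters._

object ThrottlerMetricCheck extends App {
  val server = ManagementFactory.getPlatformMBeanServer
  // Pre-3.8 domain vs the domain used since the class moved to the storage module.
  val oldOnes = server.queryNames(new ObjectName("kafka.utils:type=Throttler,*"), null)
  val newOnes = server.queryNames(
    new ObjectName("org.apache.kafka.storage.internals.utils:type=Throttler,*"), null)
  println(s"pre-3.8 Throttler MBeans:  ${oldOnes.asScala.mkString(", ")}")
  println(s"post-3.8 Throttler MBeans: ${newOnes.asScala.mkString(", ")}")
}
{code}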



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-1207) Launch Kafka from within Apache Mesos

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-1207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-1207.
---
Resolution: Won't Do

Apache Mesos has been retired for a few years already, closing this issue.

> Launch Kafka from within Apache Mesos
> -
>
> Key: KAFKA-1207
> URL: https://issues.apache.org/jira/browse/KAFKA-1207
> Project: Kafka
>  Issue Type: Bug
>  Components: packaging
>Reporter: Joe Stein
>Priority: Major
>  Labels: mesos
> Attachments: KAFKA-1207.patch, KAFKA-1207_2014-01-19_00:04:58.patch, 
> KAFKA-1207_2014-01-19_00:48:49.patch
>
>
> There are a few components to this.
> 1) The Framework: This is going to be responsible for starting up and 
> managing the failover of brokers within the Mesos cluster. This will have 
> to take some Kafka-focused parameters for launching new replica brokers and 
> moving topics and partitions around based on what is happening in the grid 
> over time.
> 2) The Scheduler: This is what is going to ask for resources for Kafka 
> brokers (new ones, replacement ones, commissioned ones) and other operations 
> such as stopping tasks (decommissioning brokers). I think this should also 
> expose a user interface (or at least a REST API) for producers and consumers 
> so we can have producers and consumers run inside of the Mesos cluster if 
> folks want (just add the jar).
> 3) The Executor: This is the task launcher. It launches tasks and kills them 
> off.
> 4) Sharing data between Scheduler and Executor: I looked at a few 
> implementations of this. I like parts of the Storm implementation but think 
> using the environment variable 
> ExecutorInfo.CommandInfo.Environment.Variables[] is the best shot. We can 
> have a command line bin/kafka-mesos-scheduler-start.sh that would build the 
> contrib project if not already built and support conf/server.properties to 
> start.
> The Framework and operating Scheduler would run on an administrative node. 
> I am probably going to hook Apache Curator into it so it can do its own 
> failover to another follower. Running more than 2 should be sufficient as 
> long as it can bring back its state (e.g. from zk). I think we can add this 
> in afterwards once everything is working.
> Additional detail can be found on the Wiki page 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=38570672



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-1407) Broker can not return to ISR because of BadVersionException

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-1407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-1407.
---
Resolution: Won't Fix

We are now removing ZooKeeper support so closing this issue.

> Broker can not return to ISR because of BadVersionException
> ---
>
> Key: KAFKA-1407
> URL: https://issues.apache.org/jira/browse/KAFKA-1407
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.1, 2.4.1
>Reporter: Dmitry Bugaychenko
>Assignee: Neha Narkhede
>Priority: Critical
>
> Each morning we found a broker out of ISR, stuck with a log full of messages:
> {code}
> INFO   | jvm 1| 2014/04/21 08:36:21 | [2014-04-21 09:36:21,907] ERROR 
> Conditional update of path /brokers/topics/topic2/partitions/1/state with 
> data 
> {"controller_epoch":46,"leader":2,"version":1,"leader_epoch":38,"isr":[2]} 
> and expected version 53 failed due to 
> org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = 
> BadVersion for /brokers/topics/topic2/partitions/1/state 
> (kafka.utils.ZkUtils$)
> INFO   | jvm 1| 2014/04/21 08:36:21 | [2014-04-21 09:36:21,907] INFO 
> Partition [topic2,1] on broker 2: Cached zkVersion [53] not equal to that in 
> zookeeper, skip updating ISR (kafka.cluster.Partition)
> {code}
> It seems that it cannot recover after a short network breakdown, and the only 
> way to bring it back is to restart it using kill -9.
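The failure in the log above is ZooKeeper's conditional update: setData only 
succeeds when the caller's expected version matches the znode's current version. 
A minimal sketch of that contract, with the path, payload, and expected version 53 
taken from the log (connect string is illustrative):

{code:scala}
import java.nio.charset.StandardCharsets
import org.apache.zookeeper.{KeeperException, WatchedEvent, ZooKeeper}

object ConditionalIsrUpdate extends App {
  val zk = new ZooKeeper("localhost:2181", 12000, (_: WatchedEvent) => ())
  val state =
    """{"controller_epoch":46,"leader":2,"version":1,"leader_epoch":38,"isr":[2]}"""
  try {
    // Fails with BadVersionException if the znode's version is no longer 53.
    zk.setData("/brokers/topics/topic2/partitions/1/state",
      state.getBytes(StandardCharsets.UTF_8), 53)
  } catch {
    case _: KeeperException.BadVersionException =>
      // The cached zkVersion is stale; the broker must re-read the znode to
      // learn the current version before retrying, which is what the stuck
      // broker in this report apparently never managed to do.
  }
  zk.close()
}
{code}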



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-1599) Change preferred replica election admin command to handle large clusters

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-1599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-1599.
---
Resolution: Won't Do

We are now removing ZooKeeper support so closing this issue.

> Change preferred replica election admin command to handle large clusters
> 
>
> Key: KAFKA-1599
> URL: https://issues.apache.org/jira/browse/KAFKA-1599
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.0
>Reporter: Todd Palino
>Assignee: Abhishek Nigam
>Priority: Major
>  Labels: newbie++
>
> We ran into a problem with a cluster that has 70k partitions where we could 
> not trigger a preferred replica election for all topics and partitions using 
> the admin tool. Upon investigation, it was determined that this was because 
> the JSON object that was being written to the admin znode to tell the 
> controller to start the election was 1.8 MB in size. As the default Zookeeper 
> data size limit is 1MB, and it is non-trivial to change, we should come up 
> with a better way to represent the list of topics and partitions for this 
> admin command.
> I have several thoughts on this so far:
> 1) Trigger the command for all topics and partitions with a JSON object that 
> does not include an explicit list of them (i.e. a flag that says "all 
> partitions")
> 2) Use a more compact JSON representation. Currently, the JSON contains a 
> 'partitions' key which holds a list of dictionaries that each have a 'topic' 
> and 'partition' key, and there must be one list item for each partition. This 
> results in a lot of unneeded repetition of key names. Changing this 
> to a format like the following would be much more compact (a rough size 
> comparison follows after this list):
> {"topics": {"topicName1": [0, 1, 2, 3], "topicName2": [0, 1]}, "version": 1}
> 3) Use a representation other than JSON. Strings are inefficient. A binary 
> format would be the most compact. This does put a greater burden on tools and 
> scripts that do not use the inbuilt libraries, but it is not too high.
> 4) Use a representation that involves multiple znodes. A structured tree in 
> the admin command would probably provide the most complete solution. However, 
> we would need to make sure to not exceed the data size limit with a wide tree 
> (the list of children for any single znode cannot exceed the ZK data size of 
> 1MB)
> Obviously, there could be a combination of #1 with a change in the 
> representation, which would likely be appropriate as well.
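For a rough sense of the savings from option 2, a small sketch that builds both 
encodings for an invented 1,000-topic, 10-partitions-each layout and compares 
serialized sizes (topic names and counts are made up; no claim about the real 
70k-partition cluster):

{code:scala}
object ElectionJsonSize extends App {
  val topics = (1 to 1000).map(i => s"topic-$i" -> (0 until 10))

  // Current verbose format: one {"topic":...,"partition":...} object per partition.
  val verbose = {
    val entries = for ((t, ps) <- topics; p <- ps)
      yield s"""{"topic":"$t","partition":$p}"""
    s"""{"version":1,"partitions":[${entries.mkString(",")}]}"""
  }

  // Proposed compact format: topic name appears once, partitions as an int array.
  val compact = {
    val entries = topics.map { case (t, ps) => s""""$t":[${ps.mkString(",")}]""" }
    s"""{"topics":{${entries.mkString(",")}},"version":1}"""
  }

  println(s"verbose: ${verbose.getBytes("UTF-8").length} bytes")
  println(s"compact: ${compact.getBytes("UTF-8").length} bytes")
}
{code}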



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-2448) BrokerChangeListener missed broker id path ephemeral node deletion event.

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-2448.
---
Resolution: Won't Fix

We are now removing ZooKeeper support so closing this issue.

> BrokerChangeListener missed broker id path ephemeral node deletion event.
> -
>
> Key: KAFKA-2448
> URL: https://issues.apache.org/jira/browse/KAFKA-2448
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Flavio Paiva Junqueira
>Priority: Major
>
> When a broker gets bounced, ideally the sequence should be like this:
> 1.1. Broker shuts down resources.
> 1.2. Broker closes its zkClient (this causes the ephemeral node 
> /brokers/ids/BROKER_ID to be deleted)
> 1.3. Broker restarts and loads the log segments
> 1.4. Broker creates the ephemeral node /brokers/ids/BROKER_ID
> The broker-side logs are:
> {noformat}
> ...
> 2015/08/17 22:42:37.663 INFO [SocketServer] [Thread-1] [kafka-server] [] 
> [Socket Server on Broker 1140], Shutting down
> 2015/08/17 22:42:37.735 INFO [SocketServer] [Thread-1] [kafka-server] [] 
> [Socket Server on Broker 1140], Shutdown completed
> ...
> 2015/08/17 22:42:53.898 INFO [ZooKeeper] [Thread-1] [kafka-server] [] 
> Session: 0x14d43fd905f68d7 closed
> 2015/08/17 22:42:53.898 INFO [ClientCnxn] [main-EventThread] [kafka-server] 
> [] EventThread shut down
> 2015/08/17 22:42:53.898 INFO [KafkaServer] [Thread-1] [kafka-server] [] 
> [Kafka Server 1140], shut down completed
> ...
> 2015/08/17 22:43:03.306 INFO [ClientCnxn] 
> [main-SendThread(zk-ei1-kafkatest.stg.linkedin.com:12913)] [kafka-server] [] 
> Session establishment complete on server zk-ei1-kafkatest.stg.linkedin
> .com/172.20.73.211:12913, sessionid = 0x24d43fd93d96821, negotiated timeout = 
> 12000
> 2015/08/17 22:43:03.306 INFO [ZkClient] [main-EventThread] [kafka-server] [] 
> zookeeper state changed (SyncConnected)
> ...
> {noformat}
> On the controller side, the sequence should be:
> 2.1. Controlled shutdown of the broker
> 2.2. BrokerChangeListener fires for a /brokers/ids child change because the 
> ephemeral node is deleted in step 1.2
> 2.3. BrokerChangeListener fires again for a /brokers/ids child change because 
> the ephemeral node is created in step 1.4
> The issue I saw was that on the controller side, the broker change listener 
> only fired once, after step 1.4, so the controller did not see any broker change.
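The single firing is consistent with ZooKeeper's one-shot watch semantics: a 
child watch fires once and is only re-armed by the next getChildren call, so a 
delete plus re-create that both land in that window are observed as one change. 
A minimal sketch of the pattern, assuming the raw ZooKeeper client (connect 
string and timeout are illustrative):

{code:scala}
import org.apache.zookeeper.{WatchedEvent, Watcher, ZooKeeper}

object BrokerIdsWatch extends App {
  val zk = new ZooKeeper("localhost:2181", 12000, (_: WatchedEvent) => ())

  def watchBrokers(): Unit = {
    val watcher = new Watcher {
      override def process(event: WatchedEvent): Unit = {
        // Window: until the getChildren below re-registers the watch, a
        // deletion followed by a re-creation of /brokers/ids/BROKER_ID
        // collapses into this single notification.
        watchBrokers()
      }
    }
    println(s"current brokers: ${zk.getChildren("/brokers/ids", watcher)}")
  }

  watchBrokers()
  Thread.sleep(Long.MaxValue)
}
{code}

The controller log for that window follows: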
> {noformat}
> 2015/08/17 22:41:46.189 [KafkaController] [Controller 1507]: Shutting down 
> broker 1140
> ...
> 2015/08/17 22:42:38.031 [RequestSendThread] 
> [Controller-1507-to-broker-1140-send-thread], Controller 1507 epoch 799 fails 
> to send request Name: StopReplicaRequest; Version: 0; CorrelationId: 5334; 
> ClientId: ; DeletePartitions: false; ControllerId: 1507; ControllerEpoch: 
> 799; Partitions: [seas-decisionboard-searcher-service_call,1] to broker 1140 
> : (EndPoint(eat1-app1140.corp.linkedin.com,10251,PLAINTEXT)). Reconnecting to 
> broker.
> java.nio.channels.ClosedChannelException
> at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
> at 
> kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:132)
> at 
> kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> 2015/08/17 22:42:38.031 [RequestSendThread] 
> [Controller-1507-to-broker-1140-send-thread], Controller 1507 connected to 
> 1140 : (EndPoint(eat1-app1140.corp.linkedin.com,10251,PLAINTEXT)) for sending 
> state change requests
> 2015/08/17 22:42:38.332 [RequestSendThread] 
> [Controller-1507-to-broker-1140-send-thread], Controller 1507 epoch 799 fails 
> to send request Name: StopReplicaRequest; Version: 0; CorrelationId: 5334; 
> ClientId: ; DeletePartitions: false; ControllerId: 1507; ControllerEpoch: 
> 799; Partitions: [seas-decisionboard-searcher-service_call,1] to broker 1140 
> : (EndPoint(eat1-app1140.corp.linkedin.com,10251,PLAINTEXT)). Reconnecting to 
> broker.
> java.nio.channels.ClosedChannelException
> at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
> at 
> kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:132)
> at 
> kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> 
> 2015/08/17 22:43:09.035 [ReplicaStateMachine$BrokerChangeListener] 
> [BrokerChangeListener on Controller 1507]: Broker change listener fired for 
> path /brokers/ids with children 
> 1140,1282,1579,871,1556,872,1511,873,874,852,1575,875,1574,1530,854,857,858,859,1493,1272,880,1547,1568,1500,1521,863,864,865,867,1507
> 2015/08/17 22:43:09.082 

[jira] [Resolved] (KAFKA-12783) Remove the deprecated ZK-based partition reassignment API

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-12783.

Resolution: Duplicate

> Remove the deprecated ZK-based partition reassignment API
> -
>
> Key: KAFKA-12783
> URL: https://issues.apache.org/jira/browse/KAFKA-12783
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
>
> ZK-based reassignment has been deprecated since AK 2.5. It's time to remove it 
> since the next major release (3.0) is coming up.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-7629) Mirror maker goes into infinite loop

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-7629.
---
Resolution: Won't Fix

The original MirrorMaker tool has been removed from Kafka. You should now use 
the Connect-based MirrorMaker: 
https://kafka.apache.org/documentation/#georeplication

> Mirror maker goes into infinite loop
> 
>
> Key: KAFKA-7629
> URL: https://issues.apache.org/jira/browse/KAFKA-7629
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.0.0
> Environment: local
>Reporter: Darshan Mehta
>Priority: Major
>
> *Setup:*
> I have 2 Kafka containers running the Spotify Kafka image 
> [https://hub.docker.com/r/spotify/kafka]
> Config:
> Image 1:
>  * host: kafka1
>  * zk port : 2181
>  * broker port : 9092
> Image 2:
>  * host: kafka2
>  * zk port : 1181
>  * broker port : 8092
> Producer Properties for Mirror maker: 
> {code:java}
> bootstrap.servers=kafka2:8092
> {code}
> Consumer Properties for Mirror maker: 
> {code:java}
> bootstrap.servers=kafka1:9092
> group.id=test-consumer-group
> exclude.internal.topics=true
> {code}
>  
> *Steps to replicate :*
>  # Start mirror maker with the following command: 
> {code:java}
> $KAFKA_INSTALLATION_DIR/bin/kafka-mirror-maker.sh --producer.config 
> <producer.properties> --consumer.config <consumer.properties> 
> --num.streams 1 --whitelist topic-1
> {code}
>  # Start local kafka console consumer to listen to topic-1 for kafka2:8092 
> {code:java}
> $KAFKA_INSTALLATION_DIR/bin/kafka-console-consumer.sh --bootstrap-server 
> kafka2:8092 --topic topic-1
> {code}
>  # Produce an event to kafka1:9092 - topic-1  -> It will be printed on the 
> console in Step 2
>  # Stop mirror maker with ctrl+C (started in step 1)
>  # Restart mirror maker with the same command
>  # Produce an event onto the same topic (i.e. repeat step 3)
>  # Both source and destination will be flooded with the same messages until 
> mirror maker is stopped
> Surprisingly, the source Kafka also gets flooded with the same message. I believe 
> that when restarted, the mirror maker is unable to read its state?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-10726) How to detect heartbeat failure between broker/zookeeper leader

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-10726.

Resolution: Information Provided

We are now removing ZooKeeper support so closing this issue.

> How to detect heartbeat failure between broker/zookeeper leader
> ---
>
> Key: KAFKA-10726
> URL: https://issues.apache.org/jira/browse/KAFKA-10726
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, logging
>Affects Versions: 2.1.1
>Reporter: Keiichiro Wakasa
>Priority: Critical
>
> Hello experts,
> I'm not sure this is the proper place to ask, but I'd appreciate it if you 
> could help us with the following question...
>  
> We've continuously suffered from broker exclusion caused by heartbeat timeouts 
> between the broker and the ZooKeeper leader.
> This issue can be easily detected by checking ephemeral nodes via zkCli.sh, 
> but we'd like to detect it with logs like server.log/controller.log since 
> we have an existing system that forwards these logs. 
> Looking at server.log/controller.log, we couldn't find any logs that 
> indicate the heartbeat timeout. Are there any other logs to check for 
> heartbeat health?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-8188) Zookeeper Connection Issue Take Down the Whole Kafka Cluster

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-8188.
---
Resolution: Won't Fix

We are now removing ZooKeeper support so closing this issue.

> Zookeeper Connection Issue Take Down the Whole Kafka Cluster
> 
>
> Key: KAFKA-8188
> URL: https://issues.apache.org/jira/browse/KAFKA-8188
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.1.1
>Reporter: Candice Wan
>Priority: Critical
> Attachments: thread_dump.log
>
>
> We recently upgraded to 2.1.1 and saw the ZooKeeper connection issues below, 
> which took down the whole cluster. We've got 3 nodes in the cluster, 2 of which 
> threw the exceptions below at the same second.
> 2019-04-03 08:25:19.603 [main-SendThread(host2:36100)] WARN 
> org.apache.zookeeper.ClientCnxn - Unable to reconnect to ZooKeeper service, 
> session 0x10071ff9baf0001 has expired
>  2019-04-03 08:25:19.603 [main-SendThread(host2:36100)] INFO 
> org.apache.zookeeper.ClientCnxn - Unable to reconnect to ZooKeeper service, 
> session 0x10071ff9baf0001 has expired, closing socket connection
>  2019-04-03 08:25:19.605 [main-EventThread] INFO 
> org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 
> 0x10071ff9baf0001
>  2019-04-03 08:25:19.605 [zk-session-expiry-handler0] INFO 
> kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Session expired.
>  2019-04-03 08:25:19.609 [zk-session-expiry-handler0] INFO 
> kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Initializing a new 
> session to host1:36100,host2:36100,host3:36100.
>  2019-04-03 08:25:19.610 [zk-session-expiry-handler0] INFO 
> org.apache.zookeeper.ZooKeeper - Initiating client connection, 
> connectString=host1:36100,host2:36100,host3:36100 sessionTimeout=6000 
> watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@12f8b1d8
>  2019-04-03 08:25:19.610 [zk-session-expiry-handler0] INFO 
> o.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes
>  2019-04-03 08:25:19.611 [zk-session-expiry-handler0-SendThread(host1:36100)] 
> WARN org.apache.zookeeper.ClientCnxn - SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> 'file:/app0/common/config/ldap-auth.config'. Will continue connection to 
> Zookeeper server without SASL authentication, if Zookeeper server allows it.
>  2019-04-03 08:25:19.611 [zk-session-expiry-handler0-SendThread(host1:36100)] 
> INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server 
> host1/169.30.47.206:36100
>  2019-04-03 08:25:19.611 [zk-session-expiry-handler0-EventThread] ERROR 
> kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Auth failed.
>  2019-04-03 08:25:19.611 [zk-session-expiry-handler0-SendThread(host1:36100)] 
> INFO org.apache.zookeeper.ClientCnxn - Socket connection established, 
> initiating session, client: /169.20.222.18:56876, server: 
> host1/169.30.47.206:36100
>  2019-04-03 08:25:19.612 [controller-event-thread] INFO 
> k.controller.PartitionStateMachine - [PartitionStateMachine controllerId=3] 
> Stopped partition state machine
>  2019-04-03 08:25:19.613 [controller-event-thread] INFO 
> kafka.controller.ReplicaStateMachine - [ReplicaStateMachine controllerId=3] 
> Stopped replica state machine
>  2019-04-03 08:25:19.614 [controller-event-thread] INFO 
> kafka.controller.KafkaController - [Controller id=3] Resigned
>  2019-04-03 08:25:19.615 [controller-event-thread] INFO 
> kafka.zk.KafkaZkClient - Creating /brokers/ids/3 (is it secure? false)
>  2019-04-03 08:25:19.628 [zk-session-expiry-handler0-SendThread(host1:36100)] 
> INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on 
> server host1/169.30.47.206:36100, sessionid = 0x1007f4d2b81, negotiated 
> timeout = 6000
>  2019-04-03 08:25:19.631 [/config/changes-event-process-thread] INFO 
> k.c.ZkNodeChangeNotificationListener - Processing notification(s) to 
> /config/changes
>  2019-04-03 08:25:19.637 [controller-event-thread] ERROR 
> k.zk.KafkaZkClient$CheckedEphemeral - Error while creating ephemeral at 
> /brokers/ids/3, node already exists and owner '72182936680464385' does not 
> match current session '72197563457011712'
>  2019-04-03 08:25:19.637 [controller-event-thread] INFO 
> kafka.zk.KafkaZkClient - Result of znode creation at /brokers/ids/3 is: 
> NODEEXISTS
>  2019-04-03 08:25:19.644 [controller-event-thread] ERROR 
> k.c.ControllerEventManager$ControllerEventThread - [ControllerEventThread 
> controllerId=3] Error processing event RegisterBrokerAndReelect
>  org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = 
> NodeExists
>  at org.apache.zookeeper.KeeperException.creat

[jira] [Resolved] (KAFKA-10041) Kafka upgrade from 1.1 to 2.4/2.5/trunk fails due to failure in ZooKeeper

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-10041.

Resolution: Information Provided

We are now removing ZooKeeper support so closing this issue.

> Kafka upgrade from 1.1 to 2.4/2.5/trunk fails due to failure in 
> ZooKeeper
> ---
>
> Key: KAFKA-10041
> URL: https://issues.apache.org/jira/browse/KAFKA-10041
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 2.4.0, 2.5.0, 2.6.0
>Reporter: Zhuqi Jin
>Priority: Major
>
> When we tested upgrading Kafka from 1.1 to 2.4/2.5, the upgraded node failed 
> to start due to a known zookeeper failure - ZOOKEEPER-3056.
> The error message is shown below:
>  
> {code:java}
> [2020-05-24 23:45:17,638] ERROR Unexpected exception, exiting abnormally 
> (org.apache.zookeeper.server.ZooKeeperServerMain)
> java.io.IOException: No snapshot found, but there are log entries. Something 
> is broken!
>  at 
> org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:240)
>  at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:240)
>  at 
> org.apache.zookeeper.server.ZooKeeperServer.loadData(ZooKeeperServer.java:290)
>  at 
> org.apache.zookeeper.server.ZooKeeperServer.startdata(ZooKeeperServer.java:450)
>  at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.startup(NIOServerCnxnFactory.java:764)
>  at 
> org.apache.zookeeper.server.ServerCnxnFactory.startup(ServerCnxnFactory.java:98)
>  at 
> org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:144)
>  at 
> org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:106)
>  at 
> org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:64)
>  at 
> org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:128)
>  at 
> org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)
> {code}
>  
> {code:java}
> [2020-05-24 23:45:25,142] ERROR Fatal error during KafkaServer startup. 
> Prepare to shutdown (kafka.server.KafkaServer)
> kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for 
> connection while in state: CONNECTING
>  at 
> kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:259)
>  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>  at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
>  at 
> kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:255)
>  at kafka.zookeeper.ZooKeeperClient.(ZooKeeperClient.scala:113)
>  at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1858)
>  at kafka.server.KafkaServer.createZkClient$1(KafkaServer.scala:375)
>  at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:399)
>  at kafka.server.KafkaServer.startup(KafkaServer.scala:207)
>  at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
>  at kafka.Kafka$.main(Kafka.scala:84)
>  at kafka.Kafka.main(Kafka.scala){code}
> It can be reproduced through the following steps:
> 1. Start a single-node Kafka 1.1. 
> 2. Create a topic and use kafka-producer-perf-test.sh to produce several 
> messages.
> {code:java}
> bin/kafka-topics.sh --create --bootstrap-server localhost:9092 
> --replication-factor 1 --partitions 1 --topic test 
> bin/kafka-producer-perf-test.sh --topic test --num-records 500 --record-size 
> 300 --throughput 100 --producer-props bootstrap.servers=localhost:9092{code}
> 3. Upgrade the node to 2.4/2.5 with the same configuration. The new-version 
> node fails to start because of ZooKeeper.
> Kafka 1.1 is using dependant-libs-2.11.12/zookeeper-3.4.10.jar, and Kafka 
> 2.4/2.5/trunk(5302efb2d1b7a69bcd3173a13b2d08a2666979ed) are using 
> zookeeper-3.5.8.jar
> The bug is fixed in zookeeper-3.6.0; should we upgrade the dependency of 
> Kafka 2.4/2.5/trunk to use zookeeper-3.6.0.jar?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-6344) 0.8.2 clients will store invalid configuration in ZK for Kafka 1.0 brokers

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-6344.
---
Resolution: Fixed

We are now removing ZooKeeper support so closing this issue.

> 0.8.2 clients will store invalid configuration in ZK for Kafka 1.0 brokers
> --
>
> Key: KAFKA-6344
> URL: https://issues.apache.org/jira/browse/KAFKA-6344
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Vincent Bernat
>Priority: Major
>
> Hello,
> When using a Kafka 0.8.2 Scala client, the "changeTopicConfig" method from 
> AdminUtils will write the topic name to /config/changes/config_change_X. 
> Since 0.9, a JSON string is expected there, and brokers will bail out if that 
> is not the case, with a java.lang.IllegalArgumentException whose message is 
> "Config change notification has an unexpected value. The format 
> is:{\"version\" : 1, \"entity_type\":\"topics/clients\", \"entity_name\" : 
> \"topic_name/client_id\"} or {\"version\" : 2, 
> \"entity_path\":\"entity_type/entity_name\"}. Received: \"dns\"". Moreover, 
> the broker will shut down after this error.
> As 1.0 brokers are expected to accept 0.8.x clients, either highlight in the 
> documentation that this doesn't apply to AdminUtils or accept this "version 0" 
> format.
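A sketch of notification payloads matching the two formats quoted in the error 
message (entity names are placeholders):

{code:scala}
object ConfigChangeNotification {
  // Version 1: separate entity type and entity name.
  def v1(entityType: String, entityName: String): String =
    s"""{"version":1,"entity_type":"$entityType","entity_name":"$entityName"}"""

  // Version 2: combined entity path.
  def v2(entityPath: String): String =
    s"""{"version":2,"entity_path":"$entityPath"}"""
}
{code}

For example, ConfigChangeNotification.v1("topics", "dns") produces the JSON the 
broker-side parser expects, instead of the bare string "dns" that the 0.8.2 
AdminUtils wrote.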



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-2952) Add ducktape test for secure->unsecure ZK migration

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-2952.
---
Resolution: Won't Do

We are now removing ZooKeeper support so closing this issue.

> Add ducktape test for secure->unsecure ZK migration 
> 
>
> Key: KAFKA-2952
> URL: https://issues.apache.org/jira/browse/KAFKA-2952
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.9.0.0
>Reporter: Flavio Paiva Junqueira
>Assignee: Flavio Paiva Junqueira
>Priority: Major
>
> We have test cases for the unsecure -> secure path, but not the other way 
> around. We should add it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-7754) zookeeper-security-migration.sh sets the root ZNode as world-readable

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-7754.
---
Resolution: Won't Fix

We are now removing ZooKeeper support so closing this issue.

> zookeeper-security-migration.sh sets the root ZNode as world-readable
> -
>
> Key: KAFKA-7754
> URL: https://issues.apache.org/jira/browse/KAFKA-7754
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0
>Reporter: Badai Aqrandista
>Priority: Minor
>
> If I start the broker with {{zookeeper.set.acl=true}} from the very first 
> broker start, the root ZNode is not set world-readable to allow other 
> applications to share the Zookeeper ensemble with a chroot.
> But if I run {{zookeeper-security-migration.sh}} with {{--zookeeper.acl 
> secure}}, the root ZNode becomes world-readable. Is this correct?
>  
> {noformat}
> root@localhost:/# zookeeper-shell localhost:2181
> Connecting to localhost:2181
> Welcome to ZooKeeper!
> JLine support is enabled
> [zk: localhost:2181(CONNECTING) 0] 
> WATCHER::
> WatchedEvent state:SyncConnected type:None path:null
> WATCHER::
> WatchedEvent state:SaslAuthenticated type:None path:null
> [zk: localhost:2181(CONNECTED) 0] getAcl /
> 'world,'anyone
> : cdrwa
> [zk: localhost:2181(CONNECTED) 1] getAcl /brokers
> 'world,'anyone
> : r
> 'sasl,'kafkabroker
> : cdrwa
> [zk: localhost:2181(CONNECTED) 2] quit
> Quitting...
> root@localhost:/# zookeeper-security-migration --zookeeper.acl secure 
> --zookeeper.connect localhost:2181
> root@localhost:/# zookeeper-shell localhost:2181
> Connecting to localhost:2181
> Welcome to ZooKeeper!
> JLine support is enabled
> [zk: localhost:2181(CONNECTING) 0] 
> WATCHER::
> WatchedEvent state:SyncConnected type:None path:null
> WATCHER::
> WatchedEvent state:SaslAuthenticated type:None path:null
> [zk: localhost:2181(CONNECTED) 0] getAcl /
> 'world,'anyone
> : r
> 'sasl,'kafkabroker
> : cdrwa
> [zk: localhost:2181(CONNECTED) 1] getAcl /brokers
> 'world,'anyone
> : r
> 'sasl,'kafkabroker
> : cdrwa
> [zk: localhost:2181(CONNECTED) 2] 
> {noformat}
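For auditing what the migration tool leaves behind, a minimal sketch using the 
plain ZooKeeper Java client (connect string is illustrative; run it with the 
same JAAS configuration as the brokers so the SASL entries are visible):

{code:scala}
import org.apache.zookeeper.{WatchedEvent, ZooKeeper}
import org.apache.zookeeper.data.Stat
import scala.jdk.CollectionConverters._

object RootAclAudit extends App {
  val zk = new ZooKeeper("localhost:2181", 12000, (_: WatchedEvent) => ())
  for (path <- Seq("/", "/brokers")) {
    // getACL returns the scheme/id/permission entries shown by getAcl in zookeeper-shell.
    val acls = zk.getACL(path, new Stat()).asScala
    println(s"$path -> ${acls.mkString(", ")}")
  }
  zk.close()
}
{code}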



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-7710) Poor Zookeeper ACL management with Kerberos

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-7710.
---
Resolution: Won't Fix

We are now removing ZooKeeper support so closing this issue.

> Poor Zookeeper ACL management with Kerberos
> ---
>
> Key: KAFKA-7710
> URL: https://issues.apache.org/jira/browse/KAFKA-7710
> Project: Kafka
>  Issue Type: Bug
>Reporter: Mr Kafka
>Priority: Major
>
> I have seen many organizations run many Kafka clusters. The simplest scenario 
> is you may have a *kafka.dev.example.com* cluster and a 
> *kafka.prod.example.com* cluster. A more extreme example is teams within 
> an organization running their own individual clusters because they want isolation.
> When you enable Zookeeper ACLs in Kafka, the ACL looks to be set to the 
> principal (SPN) that is used to authenticate against Zookeeper.
> For example I have brokers:
>  * *01.kafka.dev.example.com*
>  * *02.kafka.dev.example.com*
>  * *03.kafka.dev.example.com*
> On *01.kafka.dev.example.com* I run the security-migration tool below:
> {code:java}
> KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
>  -Dzookeeper.sasl.clientconfig=ZkClient" zookeeper-security-migration 
> --zookeeper.acl=secure --zookeeper.connect=a01.zookeeper.dev.example.com:2181
> {code}
> I end up with ACL's in Zookeeper as below:
> {code:java}
> # [zk: localhost:2181(CONNECTED) 2] getAcl /cluster
> # 'sasl,'kafka/01.kafka.dev.example.com@EXAMPLE
> # : cdrwa
> {code}
> This ACL means no other broker in the cluster can access the znode in 
> Zookeeper except broker 01.
> To resolve the issue you need to set the below properties in Zookeeper's 
> config:
> {code:java}
> kerberos.removeHostFromPrincipal = true
> kerberos.removeRealmFromPrincipal = true
> {code}
> Now when Kafka set ACL's they are stored as:
> {code:java}
> # [zk: localhost:2181(CONNECTED) 2] getAcl /cluster
> # 'sasl,'kafka
> #: cdrwa
> {code}
> This now means any broker in the cluster can access the ZK node. It also means 
> that if I have a dev Kafka broker, it can write to a "prod.zookeeper.example.com" 
> zookeeper host: when it authenticates based on the SPN 
> "kafka/01.kafka.dev.example.com", the host is dropped and we authenticate as 
> the service principal kafka.
> If your organization is flexible you may be able to create different Kerberos 
> Realms per cluster and use:
> {code:java}
> kerberos.removeHostFromPrincipal = true
> kerberos.removeRealmFromPrincipal = false{code}
> That means ACLs will be in the format "kafka/REALM", which means only brokers 
> in the same realm can connect. The difficulty here is getting your average 
> large organization's security team to create ad-hoc realms.
> *Proposal*
> Kafka should support setting ACLs for all known brokers in the cluster, i.e. 
> ACLs on a Znode would have
> {code:java}
> kafka/01.kafka.dev.example.com@EXAMPLE
> kafka/02.kafka.dev.example.com@EXAMPLE
> kafka/03.kafka.dev.example.com@EXAMPLE{code}
> With this, though, some kind of support will need to be added so that if a new 
> broker joins the cluster, its host ACL gets added to existing ZNodes.
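In ZooKeeper API terms, the proposal amounts to attaching one SASL ACL entry per 
broker principal instead of a single entry for whichever broker ran the 
migration. A sketch (hostnames are the examples above; this is not an existing 
Kafka feature):

{code:scala}
import org.apache.zookeeper.ZooDefs
import org.apache.zookeeper.data.{ACL, Id}
import scala.jdk.CollectionConverters._

object PerBrokerAcls {
  val brokerPrincipals = Seq(
    "kafka/01.kafka.dev.example.com@EXAMPLE",
    "kafka/02.kafka.dev.example.com@EXAMPLE",
    "kafka/03.kafka.dev.example.com@EXAMPLE")

  // One cdrwa entry per broker: a znode with this ACL list is usable by all
  // three brokers but by no broker from another cluster.
  val acls: java.util.List[ACL] =
    brokerPrincipals.map(p => new ACL(ZooDefs.Perms.ALL, new Id("sasl", p))).asJava
}
{code}

As the reporter notes, the open question is re-stamping existing znodes whenever 
a broker joins or leaves.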



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-7898) ERROR Caught unexpected throwable (org.apache.zookeeper.ClientCnxn)

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-7898.
---
Resolution: Won't Fix

We are now removing ZooKeeper support so closing this issue.

> ERROR Caught unexpected throwable (org.apache.zookeeper.ClientCnxn)
> ---
>
> Key: KAFKA-7898
> URL: https://issues.apache.org/jira/browse/KAFKA-7898
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Gabriel Lukacs
>Priority: Major
>
> We observed a NullPointerException on one of our brokers in a 3-broker cluster 
> environment. If I list the processes and open ports, it seems that the faulty 
> broker is running, but kafka-connect (we use it too) periodically 
> restarts due to the fact that it cannot connect to the Kafka cluster (configured 
> in ssl & plaintext mode too). Is it a bug in Kafka/ZooKeeper?
>  
> [2019-02-05 14:28:11,359] WARN Client session timed out, have not heard from 
> server in 4141ms for sessionid 0x310166e 
> (org.apache.zookeeper.ClientCnxn)
> [2019-02-05 14:28:12,525] ERROR Caught unexpected throwable 
> (org.apache.zookeeper.ClientCnxn)
> java.lang.NullPointerException
>  at 
> kafka.zookeeper.ZooKeeperClient$$anon$8.processResult(ZooKeeperClient.scala:217)
>  at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:633)
>  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:508)
> [2019-02-05 14:28:12,526] ERROR Caught unexpected throwable 
> (org.apache.zookeeper.ClientCnxn)
> [2019-02-05 14:28:22,701] WARN Client session timed out, have not heard from 
> server in 4004ms for sessionid 0x310166e 
> (org.apache.zookeeper.ClientCnxn)
> [2019-02-05 14:28:28,670] WARN Client session timed out, have not heard from 
> server in 4049ms for sessionid 0x310166e 
> (org.apache.zookeeper.ClientCnxn)
> [2019-02-05 15:05:20,601] WARN [GroupCoordinator 1]: Failed to write empty 
> metadata for group 
> encodable-emvTokenAccess-delta-encoder-group-emvIssuerAccess-v2-2-0: The 
> group is rebalancing, so a rejoin is needed. 
> (kafka.coordinator.group.GroupCoordinator)
> kafka 7381 1 0 14:22 ? 00:00:19 java -Xmx512M -Xms512M -server -XX:+UseG1GC 
> -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
> -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true 
> -Xloggc:/opt/kafka/bin/../logs/zookeeper-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/opt/kafka/bin/../logs 
> -Dlog4j.configuration=file:/opt/kafka/config/zoo-log4j.properties -cp 
> /opt/kafka/bin/../libs/activation-1.1.1.jar:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b42.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/kafka/bin/../libs/commons-lang3-3.5.jar:/opt/kafka/bin/../libs/compileScala.mapping:/opt/kafka/bin/../libs/compileScala.mapping.asc:/opt/kafka/bin/../libs/connect-api-2.1.0.jar:/opt/kafka/bin/../libs/connect-basic-auth-extension-2.1.0.jar:/opt/kafka/bin/../libs/connect-file-2.1.0.jar:/opt/kafka/bin/../libs/connect-json-2.1.0.jar:/opt/kafka/bin/../libs/connect-runtime-2.1.0.jar:/opt/kafka/bin/../libs/connect-transforms-2.1.0.jar:/opt/kafka/bin/../libs/guava-20.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0-b42.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0-b42.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0-b42.jar:/opt/kafka/bin/../libs/jackson-annotations-2.9.7.jar:/opt/kafka/bin/../libs/jackson-core-2.9.7.jar:/opt/kafka/bin/../libs/jackson-databind-2.9.7.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.9.7.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.9.7.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.9.7.jar:/opt/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.5.0-b42.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.1.jar:/opt/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/kafka/bin/../libs/jersey-client-2.27.jar:/opt/kafka/bin/../libs/jersey-common-2.27.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.27.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.27.jar:/opt/kafka/bin/../libs/jersey-hk2-2.27.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.27.jar:/opt/kafka/bin/../libs/jersey-server-2.27.jar:/opt/kafka/bin/../libs/jetty-client-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-continuation-9.4.12.v20180830.jar:/opt/kafka/

[jira] [Resolved] (KAFKA-6602) Support Kafka to save credentials in Java Key Store on Zookeeper node

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-6602.
---
Resolution: Won't Do

We are now removing ZooKeeper support so closing this issue.

> Support Kafka to save credentials in Java Key Store on Zookeeper node
> -
>
> Key: KAFKA-6602
> URL: https://issues.apache.org/jira/browse/KAFKA-6602
> Project: Kafka
>  Issue Type: New Feature
>  Components: security
>Reporter: Chen He
>Assignee: Chen He
>Priority: Major
>
> Kafka Connect needs to talk to multifarious distributed systems. However, 
> each system has its own authentication mechanism. How we manage these 
> credentials becomes a common problem. 
> Here are my thoughts:
>  # We may need to save them in a Java key store;
>  # We may need to put this key store in a distributed system (topic or 
> zookeeper);
>  # The key store password may be configured in the Kafka configuration;
> I have implemented a feature that allows storing a Java key store in a ZooKeeper 
> node. If the Kafka community likes this idea, I am happy to contribute.
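A minimal sketch of idea 2, storing the serialized key store under a znode with 
the raw ZooKeeper client (connect string, path, and file name are placeholders; 
parent znodes must already exist, and znode data is capped at 1 MB by default, 
so this only suits small key stores):

{code:scala}
import java.nio.file.{Files, Paths}
import org.apache.zookeeper.{CreateMode, WatchedEvent, ZooDefs, ZooKeeper}

object KeyStoreToZk extends App {
  val zk = new ZooKeeper("localhost:2181", 12000, (_: WatchedEvent) => ())
  val bytes = Files.readAllBytes(Paths.get("/etc/kafka/connect-credentials.jks"))
  // CREATOR_ALL_ACL restricts the znode to the authenticated identity that
  // wrote it; the key store password would still come from the Kafka
  // configuration (idea 3 above).
  zk.create("/config/keystores/connect", bytes,
    ZooDefs.Ids.CREATOR_ALL_ACL, CreateMode.PERSISTENT)
  zk.close()
}
{code}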



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-6940) Kafka Cluster and Zookeeper ensemble configuration with SASL authentication

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-6940.
---
Resolution: Information Provided

Securing ZooKeeper is covered in this section in the docs: 
https://kafka.apache.org/documentation/#zk_authz

> Kafka Cluster and Zookeeper ensemble configuration with SASL authentication
> ---
>
> Key: KAFKA-6940
> URL: https://issues.apache.org/jira/browse/KAFKA-6940
> Project: Kafka
>  Issue Type: Task
>  Components: core, security, zkclient
>Affects Versions: 0.11.0.2
> Environment: PRE Production
>Reporter: Shashank Jain
>Priority: Blocker
>  Labels: security, test
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Hi All, 
>  
>  
> I have a working Kafka cluster and ZooKeeper ensemble, but after 
> integrating SASL authentication I am facing the exceptions below.
>  
>  
> Zookeeper:- 
>  
>  
> 2018-05-23 07:39:59,476 [myid:1] - INFO  [ProcessThread(sid:1 
> cport:-1)::PrepRequestProcessor@653] - 
> Got user-level KeeperException when processing sessionid:0x301cae0b3480002 
> type:delete cxid:0x48 zxid:0x2004e txntype:-1 reqpath:n/a Error 
> Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for 
> /admin/preferred_replica_election
> 2018-05-23 07:40:39,240 [myid:1] - INFO  [ProcessThread(sid:1 
> cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when 
> processing sessionid:0x200b4f13c190006 type:create cxid:0x20 zxid:0x20052 
> txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists 
> for /brokers
> 2018-05-23 07:40:39,240 [myid:1] - INFO  [ProcessThread(sid:1 
> cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when 
> processing sessionid:0x200b4f13c190006 type:create cxid:0x21 zxid:0x20053 
> txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = 
> NodeExists for /brokers/ids
> 2018-05-23 07:41:00,864 [myid:1] - INFO  [ProcessThread(sid:1 
> cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when 
> processing sessionid:0x301cae0b3480004 type:create cxid:0x20 zxid:0x20058 
> txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists 
> for /brokers
> 2018-05-23 07:41:00,864 [myid:1] - INFO  [ProcessThread(sid:1 
> cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when 
> processing sessionid:0x301cae0b3480004 type:create cxid:0x21 zxid:0x20059 
> txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = 
> NodeExists for /brokers/ids
> 2018-05-23 07:41:28,456 [myid:1] - INFO  [ProcessThread(sid:1 
> cport:-1)::PrepRequestProcessor@487] - Processed session termination for 
> sessionid: 0x200b4f13c190002
> 2018-05-23 07:41:29,563 [myid:1] - INFO  [ProcessThread(sid:1 
> cport:-1)::PrepRequestProcessor@487] - Processed session termination for 
> sessionid: 0x301cae0b3480002
> 2018-05-23 07:41:29,569 [myid:1] - INFO  [ProcessThread(sid:1 
> cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when 
> processing sessionid:0x200b4f13c190006 type:create cxid:0x2d zxid:0x2005f 
> txntype:-1 reqpath:n/a Error Path:/controller Error:KeeperErrorCode = 
> NodeExists for /controller
> 2018-05-23 07:41:29,679 [myid:1] - INFO  [ProcessThread(sid:1 
> cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when 
> processing sessionid:0x301cae0b3480004 type:delete cxid:0x4e zxid:0x20061 
> txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election 
> Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
>  
>  
> Kafka:- 
>  
> [2018-05-23 09:06:31,969] ERROR [ReplicaFetcherThread-0-1]: Error for 
> partition [23MAY,0] to broker 
> 1:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
>  
>  
>  
> ERROR [ReplicaFetcherThread-0-2]: Current offset 142474 for partition 
> [23MAY,1] out of range; reset offset to 142478 
> (kafka.server.ReplicaFetcherThread)
>  
>  
> ERROR [ReplicaFetcherThread-0-2]: Error for partition [23MAY,2] to broker 
> 2:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server 
> is not the leader for that topic-partition. 
> (kafka.server.ReplicaFetcherThread)
>  
>  
>  
> Below are my configuration:- 
>  
>  
> Zookeeper:- 
>  
>  java.env
> SERVER_JVMFLAGS="-Djava.security.auth.login.config=/usr/local/zookeeper/conf/ZK_jaas.conf"
>  
>  
> ZK_jaas.conf
> Server
>  
> { org.apache.zookeeper.server.auth.DigestLoginModule required
>   username="admin"
>   password="admin-secret"
>   user_admin="admin-secret";
>  };
>  
> QuorumServer {
>        org.apache.zookeeper.server.auth.DigestLoginModule required
>        user_test="test";

[jira] [Resolved] (KAFKA-7090) Zookeeper client setting in server-properties

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-7090.
---
Resolution: Won't Do

We are now removing ZooKeeper support so closing this issue.

> Zookeeper client setting in server-properties
> -
>
> Key: KAFKA-7090
> URL: https://issues.apache.org/jira/browse/KAFKA-7090
> Project: Kafka
>  Issue Type: New Feature
>  Components: config, documentation
>Reporter: Christian Tramnitz
>Priority: Minor
>
> There are several Zookeeper client settings that may be used to connect to ZK.
> Currently, it seems only very few zookeeper.* settings are supported in 
> Kafka's server.properties file. Wouldn't it make sense to support all 
> Zookeeper client settings there, or where else would they need to go?
> I.e. for using Zookeeper 3.5 with TLS enabled, the following properties are 
> required:
> zookeeper.clientCnxnSocket
> zookeeper.client.secure
> zookeeper.ssl.keyStore.location
> zookeeper.ssl.keyStore.password
> zookeeper.ssl.trustStore.location
> zookeeper.ssl.trustStore.password
> It's obviously possible to pass them through "-D", but especially for the 
> keystore password, I'd be more comfortable with this sitting in the 
> properties file than being visible in the process list...
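For reference, a sketch of setting those same ZooKeeper 3.5 client properties 
programmatically rather than via -D flags (values are placeholders; note this 
still leaves the passwords readable by anything that can inspect system 
properties, which is the reporter's underlying concern):

{code:scala}
object ZkTlsProps {
  def apply(): Unit = {
    // Property keys are the standard ZooKeeper 3.5 client settings listed above.
    System.setProperty("zookeeper.clientCnxnSocket",
      "org.apache.zookeeper.ClientCnxnSocketNetty")
    System.setProperty("zookeeper.client.secure", "true")
    System.setProperty("zookeeper.ssl.keyStore.location", "/etc/kafka/zk-client.jks")
    System.setProperty("zookeeper.ssl.keyStore.password", "changeit")
    System.setProperty("zookeeper.ssl.trustStore.location", "/etc/kafka/zk-trust.jks")
    System.setProperty("zookeeper.ssl.trustStore.password", "changeit")
  }
}
{code}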



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-4418) Broker Leadership Election Fails If Missing ZK Path Raises Exception

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-4418.
---
Resolution: Won't Fix

We are now removing ZooKeeper support so closing this issue.

> Broker Leadership Election Fails If Missing ZK Path Raises Exception
> 
>
> Key: KAFKA-4418
> URL: https://issues.apache.org/jira/browse/KAFKA-4418
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 0.9.0.1, 0.10.0.0, 0.10.0.1
>Reporter: Michael Pedersen
>Priority: Major
>  Labels: reliability
>
> Our Kafka cluster went down because a single node went down *and* a path in 
> Zookeeper was missing for one topic (/brokers/topics/<topic>/partitions). 
> When this occurred, leadership election could not run, and produced a stack 
> trace that looked like this:
> Failed to start preferred replica election
> org.I0Itec.zkclient.exception.ZkNoNodeException: 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /brokers/topics/warandpeace/partitions
>   at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:47)
>   at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:995)
>   at org.I0Itec.zkclient.ZkClient.getChildren(ZkClient.java:675)
>   at org.I0Itec.zkclient.ZkClient.getChildren(ZkClient.java:671)
>   at kafka.utils.ZkUtils.getChildren(ZkUtils.scala:537)
>   at 
> kafka.utils.ZkUtils$$anonfun$getAllPartitions$1.apply(ZkUtils.scala:817)
>   at 
> kafka.utils.ZkUtils$$anonfun$getAllPartitions$1.apply(ZkUtils.scala:816)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>   at kafka.utils.ZkUtils.getAllPartitions(ZkUtils.scala:816)
>   at 
> kafka.admin.PreferredReplicaLeaderElectionCommand$.main(PreferredReplicaLeaderElectionCommand.scala:64)
>   at 
> kafka.admin.PreferredReplicaLeaderElectionCommand.main(PreferredReplicaLeaderElectionCommand.scala)
> Caused by: org.apache.zookeeper.KeeperException$NoNodeException: 
> KeeperErrorCode = NoNode for /brokers/topics/warandpeace/partitions
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>   at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1472)
>   at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1500)
>   at org.I0Itec.zkclient.ZkConnection.getChildren(ZkConnection.java:114)
>   at org.I0Itec.zkclient.ZkClient$4.call(ZkClient.java:678)
>   at org.I0Itec.zkclient.ZkClient$4.call(ZkClient.java:675)
>   at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:985)
>   ... 16 more
> I have checked through the code a bit, and have found a quick place to 
> introduce a fix that would seem to allow the leadership election to continue. 
> Specifically, the function at 
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/utils/ZkUtils.scala#L633
>  does not handle possible exceptions. Wrapping a try/catch block here would 
> work, but could introduce a number of other problems:
> * If the code is used elsewhere, the exception might be needed at a higher 
> level to prevent something else.
> * Unless the exception is logged/reported somehow, no one will know this 
> problem exists, which makes debugging other problems harder.
> I'm sure there are other issues I'm not aware of, but those two come to mind 
> quickly. What would be the best route for getting this resolved quickly?
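A sketch of the guard described above, assuming the 0.9/0.10-era 
kafka.utils.ZkUtils API visible in the stack trace (the method is hypothetical, 
not the actual patch; note the reporter's caveat that the exception must at 
least be logged):

{code:scala}
import kafka.common.TopicAndPartition
import kafka.utils.ZkUtils
import org.I0Itec.zkclient.exception.ZkNoNodeException

object TolerantPartitions {
  // Skip topics whose partitions path is missing instead of letting the
  // NoNode exception abort the whole preferred replica election.
  def getAllPartitionsTolerant(zkUtils: ZkUtils): Set[TopicAndPartition] = {
    val topics = zkUtils.getChildren(ZkUtils.BrokerTopicsPath)
    topics.flatMap { topic =>
      try {
        zkUtils.getChildren(s"${ZkUtils.BrokerTopicsPath}/$topic/partitions")
          .map(p => TopicAndPartition(topic, p.toInt))
      } catch {
        case _: ZkNoNodeException =>
          // Log here so the inconsistent topic is visible rather than silently skipped.
          Seq.empty
      }
    }.toSet
  }
}
{code}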



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-3685) Auto-generate ZooKeeper data structure wiki

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-3685.
---
Resolution: Won't Do

We are now removing ZooKeeper support so closing this issue.

> Auto-generate ZooKeeper data structure wiki
> ---
>
> Key: KAFKA-3685
> URL: https://issues.apache.org/jira/browse/KAFKA-3685
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Minor
>
> The ZooKeeper data structure wiki page is located at 
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper.
>  This should be auto-generated and versioned according to various releases. A 
> similar auto-generation has previously been done for the protocol.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-1155) Kafka server can miss zookeeper watches during long zkclient callbacks

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-1155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-1155.
---
Resolution: Won't Fix

We are now removing ZooKeeper support so closing this issue.

> Kafka server can miss zookeeper watches during long zkclient callbacks
> --
>
> Key: KAFKA-1155
> URL: https://issues.apache.org/jira/browse/KAFKA-1155
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.0, 0.8.1, 0.8.2.0
>Reporter: Neha Narkhede
>Assignee: Neha Narkhede
>Priority: Critical
>  Labels: newbie++
>
> On getting a zookeeper watch, zkclient invokes the blocking user callback and 
> only re-registers the watch after the callback returns. This leaves a 
> possibly large window of time when Kafka has not registered for watches on 
> the desired zookeeper paths and hence can miss important state changes (on 
> the controller). In any case, it is worth noting that even though zookeeper 
> has a read-and-set-watch API, there can always be a window of time between 
> the watch being fired, the callback and the read-and-set-watch API call. Due 
> to the zkclient wrapper, it is difficult to handle this properly in the Kafka 
> code unless we directly use the zookeeper client. One way of getting around 
> this issue is to use timestamps on the paths and when a watch fires, check if 
> the timestamp in zk is different from the one in the callback handler.
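A minimal sketch of the workaround floated at the end: when a watch fires, 
compare the znode's current stat against the last one processed, so state 
changes coalesced while the watch was unregistered are still detected (path and 
connect string are illustrative):

{code:scala}
import org.apache.zookeeper.{WatchedEvent, Watcher, ZooKeeper}
import org.apache.zookeeper.data.Stat

object StatCheckingWatch extends App {
  val zk = new ZooKeeper("localhost:2181", 12000, (_: WatchedEvent) => ())
  @volatile var lastMzxid = -1L

  def readAndWatch(): Unit = {
    val stat = new Stat()
    zk.getData("/controller", new Watcher {
      override def process(e: WatchedEvent): Unit = readAndWatch()
    }, stat)
    // Compare modification zxid rather than trusting one event per change.
    if (stat.getMzxid != lastMzxid) {
      lastMzxid = stat.getMzxid
      // handle the (possibly coalesced) state change here
    }
  }

  readAndWatch()
  Thread.sleep(Long.MaxValue)
}
{code}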



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-6062) Reduce topic partition count in kafka version 0.10.0.0

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-6062.
---
Resolution: Cannot Reproduce

> Reduce topic partition count in kafka version 0.10.0.0
> --
>
> Key: KAFKA-6062
> URL: https://issues.apache.org/jira/browse/KAFKA-6062
> Project: Kafka
>  Issue Type: Task
>Reporter: Balu
>Priority: Major
>
> We are using a Kafka, ZooKeeper, and Schema Registry cluster. The current 
> partition count is 10 and we have to make it 3. Can we do it?
> I'd appreciate the steps, even if data loss is acceptable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-795) Improvements to PreferredReplicaLeaderElection tool

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-795.
--
Resolution: Won't Do

We are now removing ZooKeeper support so closing this issue.

> Improvements to PreferredReplicaLeaderElection tool
> ---
>
> Key: KAFKA-795
> URL: https://issues.apache.org/jira/browse/KAFKA-795
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Swapnil Ghike
>Assignee: Swapnil Ghike
>Priority: Major
>
> We can make some improvements to the PreferredReplicaLeaderElection tool:
> 1. Terminate the tool if a controller is not up and running. Currently we can 
> run the tool without having any broker running, which is kind of confusing. 
> 2. Should we delete /admin zookeeper path in PreferredReplicaLeaderElection 
> (and ReassignPartition) tool at the end? Otherwise the next run of the tool 
> complains that a replica election is already in progress. 
> 3. If there is an error, we can see it in controller.log. Should the tool also 
> throw an error?
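A sketch of improvement 1, refusing to run when no controller is registered 
(/controller is the standard ephemeral znode; connect string is illustrative):

{code:scala}
import org.apache.zookeeper.{WatchedEvent, ZooKeeper}

object RequireController extends App {
  val zk = new ZooKeeper("localhost:2181", 12000, (_: WatchedEvent) => ())
  // The controller holds an ephemeral /controller znode; absence means no
  // controller is up, so the election request would just sit under /admin.
  if (zk.exists("/controller", false) == null) {
    System.err.println("No active controller found; aborting election.")
    sys.exit(1)
  }
  zk.close()
}
{code}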



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-7349) Long Disk Writes cause Zookeeper Disconnects

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-7349.
---
Resolution: Won't Fix

We are now removing ZooKeeper support so closing this issue.

> Long Disk Writes cause Zookeeper Disconnects
> 
>
> Key: KAFKA-7349
> URL: https://issues.apache.org/jira/browse/KAFKA-7349
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 0.11.0.1
>Reporter: Adam Kafka
>Priority: Minor
> Attachments: SpikeInWriteTime.png
>
>
> We run our Kafka cluster on a cloud service provider. As a consequence, we 
> notice large tail write latencies that are out of our control. Some 
> writes take on the order of seconds. We have noticed that these long 
> write times are often correlated with subsequent Zookeeper disconnects from the 
> brokers. It appears that during the long write, the Zookeeper heartbeat 
> thread does not get scheduled CPU time, resulting in a long gap in heartbeats 
> sent. After the write, the ZK thread does get scheduled CPU time, but it 
> detects that it has not received a heartbeat from Zookeeper in a while, so it 
> drops its connection and then rejoins the cluster.
> Note that the timeout reported is inconsistent with the timeout as set by the 
> configuration ({{zookeeper.session.timeout.ms}} = default of 6 seconds). We 
> have seen a range of values reported here, including 5950ms (less than 
> threshold), 12032ms (double the threshold), 25999ms (much larger than the 
> threshold).
> We noticed that during a service degradation of the storage service of our 
> cloud provider, these Zookeeper disconnects increased drastically in 
> frequency. 
> We are hoping there is a way to decouple these components. Do you agree with 
> our diagnosis that the ZK disconnects are occurring due to thread contention 
> caused by long disk writes? Perhaps the ZK thread could be scheduled at a 
> higher priority? Do you have any suggestions for how to avoid the ZK 
> disconnects?
> Here is an example of one of these events:
> Logs on the Broker:
> {code}
> [2018-08-25 04:10:19,695] DEBUG Got ping response for sessionid: 
> 0x36202ab4337002c after 1ms (org.apache.zookeeper.ClientCnxn)
> [2018-08-25 04:10:21,697] DEBUG Got ping response for sessionid: 
> 0x36202ab4337002c after 1ms (org.apache.zookeeper.ClientCnxn)
> [2018-08-25 04:10:23,700] DEBUG Got ping response for sessionid: 
> 0x36202ab4337002c after 1ms (org.apache.zookeeper.ClientCnxn)
> [2018-08-25 04:10:25,701] DEBUG Got ping response for sessionid: 
> 0x36202ab4337002c after 1ms (org.apache.zookeeper.ClientCnxn)
> [2018-08-25 04:10:27,702] DEBUG Got ping response for sessionid: 
> 0x36202ab4337002c after 1ms (org.apache.zookeeper.ClientCnxn)
> [2018-08-25 04:10:29,704] DEBUG Got ping response for sessionid: 
> 0x36202ab4337002c after 1ms (org.apache.zookeeper.ClientCnxn)
> [2018-08-25 04:10:31,707] DEBUG Got ping response for sessionid: 
> 0x36202ab4337002c after 1ms (org.apache.zookeeper.ClientCnxn)
> [2018-08-25 04:10:33,709] DEBUG Got ping response for sessionid: 
> 0x36202ab4337002c after 1ms (org.apache.zookeeper.ClientCnxn)
> [2018-08-25 04:10:35,712] DEBUG Got ping response for sessionid: 
> 0x36202ab4337002c after 1ms (org.apache.zookeeper.ClientCnxn)
> [2018-08-25 04:10:37,714] DEBUG Got ping response for sessionid: 
> 0x36202ab4337002c after 1ms (org.apache.zookeeper.ClientCnxn)
> [2018-08-25 04:10:39,716] DEBUG Got ping response for sessionid: 
> 0x36202ab4337002c after 1ms (org.apache.zookeeper.ClientCnxn)
> [2018-08-25 04:10:41,719] DEBUG Got ping response for sessionid: 
> 0x36202ab4337002c after 1ms (org.apache.zookeeper.ClientCnxn)
> ...
> [2018-08-25 04:10:53,752] WARN Client session timed out, have not heard from 
> server in 12032ms for sessionid 0x36202ab4337002c 
> (org.apache.zookeeper.ClientCnxn)
> [2018-08-25 04:10:53,754] INFO Client session timed out, have not heard from 
> server in 12032ms for sessionid 0x36202ab4337002c, closing socket connection 
> and attempting reconnect (org.apache.zookeeper.ClientCnxn)
> [2018-08-25 04:10:53,920] INFO zookeeper state changed (Disconnected) 
> (org.I0Itec.zkclient.ZkClient)
> [2018-08-25 04:10:53,920] INFO Waiting for keeper state SyncConnected 
> (org.I0Itec.zkclient.ZkClient)
> ...
> {code}
> GC logs during the same time (demonstrating this is not just a long GC):
> {code}
> 2018-08-25T04:10:36.434+: 35150.779: [GC (Allocation Failure)  
> 3074119K->2529089K(6223360K), 0.0137342 secs]
> 2018-08-25T04:10:37.367+: 35151.713: [GC (Allocation Failure)  
> 3074433K->2528524K(6223360K), 0.0127938 secs]
> 2018-08-25T04:10:38.274+: 35152.620: [GC (Allocation Failure)  
> 3073868K->2528357K(6223360K), 0.0131040 secs]
> 2018-08-25T04:10:39.220+: 35153.566: [GC (Allocation Failure)
> {code}
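One common mitigation on ZooKeeper-based clusters was simply to give the heartbeat thread more headroom before the session expires; a hedged server.properties sketch, where the value is an example rather than a recommendation:

{code}
# Broker-side ZooKeeper session timeout; 18s gives scheduling stalls more slack
zookeeper.session.timeout.ms=18000
{code}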

[jira] [Resolved] (KAFKA-14234) /admin/delete_topics is not in the list of zookeeper watchers

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14234.

Resolution: Cannot Reproduce

We are now removing ZooKeeper support so closing this issue.

> /admin/delete_topics is not in the list of zookeeper watchers
> -
>
> Key: KAFKA-14234
> URL: https://issues.apache.org/jira/browse/KAFKA-14234
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 3.2.1
>Reporter: Yan Xue
>Priority: Minor
>
> I deployed the Kafka cluster on Kubernetes and am trying to figure out how 
> topic deletion works. I know the Kafka controller has the topic deletion manager 
> which watches for node changes in ZooKeeper. Whenever a topic is deleted, 
> the manager is triggered. I expected {{/admin/delete_topics}} to be 
> in the watcher list. However, I didn't find it. Sample output:
> root@kafka-broker-2:/opt/kafka# echo wchc | nc ZOOKEEPER_IP 2181
> 0x20010021139
>     /admin/preferred_replica_election
>     /brokers/ids/0
>     /brokers/ids/1
>     /brokers/ids/2
>     /brokers/topics/__consumer_offsets
>     /brokers/ids/3
>     /brokers/ids/4
>     /controller
>     /admin/reassign_partitions
>     /brokers/topics/test-test
>     /feature
> 0x200100211390001
>     /controller
>     /feature
> 0x1631f9
>     /controller
>     /feature
>  
> Even though I can delete the topic, I am confused about the output.
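As a side note, wchc groups watches by session id; to see them grouped by znode path instead, the wchp four-letter word can be used (on recent ZooKeeper versions the command must be allowed via 4lw.commands.whitelist; ZOOKEEPER_IP is a placeholder as above):

{code}
echo wchp | nc ZOOKEEPER_IP 2181
{code}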



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-13774) AclAuthorizer should handle it a bit more gracefully if zookeeper.connect is null

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-13774.

Resolution: Won't Fix

We are now removing ZooKeeper support so closing this issue.

> AclAuthorizer should handle it a bit more gracefully if zookeeper.connect is 
> null
> -
>
> Key: KAFKA-13774
> URL: https://issues.apache.org/jira/browse/KAFKA-13774
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin McCabe
>Priority: Minor
>  Labels: kip-500
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-8524) Zookeeper Acl Sensitive Path Extension

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-8524.
---
Resolution: Won't Do

We are now removing ZooKeeper support so closing this issue.

> Zookeeper Acl Sensitive Path Extension
> --
>
> Key: KAFKA-8524
> URL: https://issues.apache.org/jira/browse/KAFKA-8524
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 1.1.0, 2.2.1
>Reporter: sebastien diaz
>Priority: Major
>  Labels: path, zkcli, zookeeper
>
> There is too much readable config in ZooKeeper, such as /brokers, /controller, 
> /kafka-acl, etc.
> As ZooKeeper can be accessed by other projects/users, the security should be 
> extended to ZooKeeper ACLs properly.
> We should have the possibility to set these paths by configuration and not 
> (as it is today) in the code.
>  
>  
>  
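For context, the knob that does exist for locking these paths down is the broker's zookeeper.set.acl flag, applied retroactively with the security migration tool; a hedged sketch, with the connect string as an example:

{code}
# server.properties: have brokers create metadata znodes with secure ACLs
zookeeper.set.acl=true
{code}

For an existing cluster, bin/zookeeper-security-migration.sh --zookeeper.acl secure --zookeeper.connect localhost:2181 retrofits ACLs onto already-created paths; the paths themselves remain hard-coded, which is the gap this issue describes.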



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-8707) Zookeeper Session expired either before or while waiting for connection

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-8707.
---
Resolution: Duplicate

> Zookeeper Session expired either before or while waiting for connection
> ---
>
> Key: KAFKA-8707
> URL: https://issues.apache.org/jira/browse/KAFKA-8707
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 2.0.1
>Reporter: Chethan Bheemaiah
>Priority: Major
>
> Recently we encountered an issue in one of our Kafka clusters. One of the 
> nodes went down and was not rejoining the cluster on restart. We 
> observed "Session expired" error messages in server.log.
> Below is one message
> ERROR kafka.common.ZkNodeChangeNotificationListener: Error while processing 
> notification change for path = /config/changes
> kafka.zookeeper.ZooKeeperClientExpiredException: Session expired either 
> before or while waiting for connection
> at 
> kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply$mcV$sp(ZooKeeperClient.scala:238)
> at 
> kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply(ZooKeeperClient.scala:226)
> at 
> kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply(ZooKeeperClient.scala:226)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
> at 
> kafka.zookeeper.ZooKeeperClient.kafka$zookeeper$ZooKeeperClient$$waitUntilConnected(ZooKeeperClient.scala:226)
> at 
> kafka.zookeeper.ZooKeeperClient$$anonfun$waitUntilConnected$1.apply$mcV$sp(ZooKeeperClient.scala:220)
> at 
> kafka.zookeeper.ZooKeeperClient$$anonfun$waitUntilConnected$1.apply(ZooKeeperClient.scala:220)
> at 
> kafka.zookeeper.ZooKeeperClient$$anonfun$waitUntilConnected$1.apply(ZooKeeperClient.scala:220)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
> at 
> kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:219)
> at 
> kafka.zk.KafkaZkClient.retryRequestsUntilConnected(KafkaZkClient.scala:1510)
> at 
> kafka.zk.KafkaZkClient.kafka$zk$KafkaZkClient$$retryRequestUntilConnected(KafkaZkClient.scala:1486)
> at kafka.zk.KafkaZkClient.getChildren(KafkaZkClient.scala:585)
> at 
> kafka.common.ZkNodeChangeNotificationListener.kafka$common$ZkNodeChangeNotificationListener$$processNotifications(ZkNodeChangeNotificationListener.scala:82)
> at 
> kafka.common.ZkNodeChangeNotificationListener$ChangeNotification.process(ZkNodeChangeNotificationListener.scala:119)
> at 
> kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread.doWork(ZkNodeChangeNotificationListener.scala:145)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-8708) Zookeeper Session expired either before or while waiting for connection

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-8708.
---
Resolution: Won't Fix

We are now removing ZooKeeper support so closing this issue.

> Zookeeper Session expired either before or while waiting for connection
> ---
>
> Key: KAFKA-8708
> URL: https://issues.apache.org/jira/browse/KAFKA-8708
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 2.0.1
>Reporter: Chethan Bheemaiah
>Priority: Major
>
> Recently we encountered an issue in one of our Kafka clusters. One of the 
> nodes went down and was not rejoining the cluster on restart. We 
> observed "Session expired" error messages in server.log.
> Below is one message
> ERROR kafka.common.ZkNodeChangeNotificationListener: Error while processing 
> notification change for path = /config/changes
> kafka.zookeeper.ZooKeeperClientExpiredException: Session expired either 
> before or while waiting for connection
> at 
> kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply$mcV$sp(ZooKeeperClient.scala:238)
> at 
> kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply(ZooKeeperClient.scala:226)
> at 
> kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply(ZooKeeperClient.scala:226)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
> at 
> kafka.zookeeper.ZooKeeperClient.kafka$zookeeper$ZooKeeperClient$$waitUntilConnected(ZooKeeperClient.scala:226)
> at 
> kafka.zookeeper.ZooKeeperClient$$anonfun$waitUntilConnected$1.apply$mcV$sp(ZooKeeperClient.scala:220)
> at 
> kafka.zookeeper.ZooKeeperClient$$anonfun$waitUntilConnected$1.apply(ZooKeeperClient.scala:220)
> at 
> kafka.zookeeper.ZooKeeperClient$$anonfun$waitUntilConnected$1.apply(ZooKeeperClient.scala:220)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
> at 
> kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:219)
> at 
> kafka.zk.KafkaZkClient.retryRequestsUntilConnected(KafkaZkClient.scala:1510)
> at 
> kafka.zk.KafkaZkClient.kafka$zk$KafkaZkClient$$retryRequestUntilConnected(KafkaZkClient.scala:1486)
> at kafka.zk.KafkaZkClient.getChildren(KafkaZkClient.scala:585)
> at 
> kafka.common.ZkNodeChangeNotificationListener.kafka$common$ZkNodeChangeNotificationListener$$processNotifications(ZkNodeChangeNotificationListener.scala:82)
> at 
> kafka.common.ZkNodeChangeNotificationListener$ChangeNotification.process(ZkNodeChangeNotificationListener.scala:119)
> at 
> kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread.doWork(ZkNodeChangeNotificationListener.scala:145)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-8935) Please update zookeeper in the next release

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-8935.
---
Resolution: Invalid

Kafka currently uses ZooKeeper 3.8.4, closing.

> Please update zookeeper in the next release
> ---
>
> Key: KAFKA-8935
> URL: https://issues.apache.org/jira/browse/KAFKA-8935
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Agostino Sarubbo
>Priority: Major
>
> Please update ZooKeeper in the next release. At the moment, 2.3.0 ships 
> zookeeper-3.4.14, which does not support SSL. Thanks



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-7546) Java implementation for Authorizer

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-7546.
---
Resolution: Not A Problem

The org.apache.kafka.server.authorizer.Authorizer java interface exists to 
implement custom authorizers that don't depend on ZooKeeper, closing.

> Java implementation for Authorizer
> --
>
> Key: KAFKA-7546
> URL: https://issues.apache.org/jira/browse/KAFKA-7546
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Pradeep Bansal
>Priority: Major
> Attachments: AuthorizerImpl.PNG
>
>
> I am using Kafka with authentication and authorization. I wanted to plug in my 
> own implementation of Authorizer which doesn't use ZooKeeper but instead keeps 
> the permission mapping in a SQL database. Is it possible to write Authorizer code 
> in Java?
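Yes: since KIP-504 the Authorizer interface itself is Java. A minimal hedged sketch of a custom implementation backed by an external store; the class name is hypothetical and the SQL lookup is stubbed out:

{code}
import java.io.IOException;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.stream.Collectors;

import org.apache.kafka.common.Endpoint;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclBindingFilter;
import org.apache.kafka.server.authorizer.*;

public class SqlAuthorizer implements Authorizer {

    @Override
    public void configure(Map<String, ?> configs) {
        // Read the JDBC URL etc. from the broker config here (assumption).
    }

    @Override
    public Map<Endpoint, ? extends CompletionStage<Void>> start(AuthorizerServerInfo serverInfo) {
        // Report every listener as immediately ready to authorize.
        return serverInfo.endpoints().stream().collect(
            Collectors.toMap(e -> e, e -> CompletableFuture.<Void>completedFuture(null)));
    }

    @Override
    public List<AuthorizationResult> authorize(AuthorizableRequestContext context,
                                               List<Action> actions) {
        // One result per action; the actual SQL permission lookup is stubbed out.
        return actions.stream()
            .map(a -> isAllowedInDatabase(context.principal().getName(), a)
                ? AuthorizationResult.ALLOWED : AuthorizationResult.DENIED)
            .collect(Collectors.toList());
    }

    private boolean isAllowedInDatabase(String principal, Action action) {
        return false; // placeholder for the SQL query
    }

    @Override
    public List<? extends CompletionStage<AclCreateResult>> createAcls(
            AuthorizableRequestContext context, List<AclBinding> bindings) {
        throw new UnsupportedOperationException("ACLs are managed directly in SQL");
    }

    @Override
    public List<? extends CompletionStage<AclDeleteResult>> deleteAcls(
            AuthorizableRequestContext context, List<AclBindingFilter> filters) {
        throw new UnsupportedOperationException("ACLs are managed directly in SQL");
    }

    @Override
    public Iterable<AclBinding> acls(AclBindingFilter filter) {
        return Collections.emptyList(); // ACLs live in SQL, nothing to enumerate here
    }

    @Override
    public void close() throws IOException {}
}
{code}

The broker would pick it up via authorizer.class.name=com.example.SqlAuthorizer (package name hypothetical).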



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-9147) zookeeper service not running

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-9147.
---
Resolution: Invalid

This is not a Kafka issue, closing.

> zookeeper service not running 
> --
>
> Key: KAFKA-9147
> URL: https://issues.apache.org/jira/browse/KAFKA-9147
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 2.3.0
> Environment: Ubuntu
>Reporter: parimal
>Priority: Major
>
> I was able to start the ZooKeeper service on standalone Ubuntu using the command
>  
> root@N-5CG73531RZ:/# /usr/local/zookeeper/bin/zkServer.sh start
> /usr/bin/java
> ZooKeeper JMX enabled by default
> Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
> Starting zookeeper ... STARTED
>  
> However when I do ps -ef I don't see any ZooKeeper process running 
>  
> root@N-5CG73531RZ:/# ps -ef
> UID PID PPID C STIME TTY TIME CMD
> root 1 0 0 Nov04 ? 00:00:00 /init
> root 5 1 0 Nov04 tty1 00:00:00 /init
> pgarg00 6 5 0 Nov04 tty1 00:00:00 -bash
> root 2861 6 0 Nov04 tty1 00:00:00 sudo -i
> root 2862 2861 0 Nov04 tty1 00:00:03 -bash
> root 5347 1 0 18:24 ? 00:00:00 /usr/sbin/sshd
> root 5367 1 0 18:25 ? 00:00:00 /usr/sbin/inetd
> root 8950 2862 0 19:15 tty1 00:00:00 ps -ef
>  
> Also when I do telnet, the connection is refused:
> root@N-5CG73531RZ:/# telnet localhost 2181
> Trying 127.0.0.1...
> telnet: Unable to connect to remote host: Connection refused
>  
> Can you please help me?
>  
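For what it's worth, zkServer.sh printing STARTED only means the JVM launch was attempted; a quick way to check whether it actually stayed up (install paths vary):

{code}
/usr/local/zookeeper/bin/zkServer.sh status
# startup failures usually end up in zookeeper.out in the directory the
# server was started from (or ZOO_LOG_DIR if set)
{code}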



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-7721) Connection to zookeeper refused

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-7721.
---
Resolution: Cannot Reproduce

We are now removing ZooKeeper support so closing this issue.

> Connection to zookeeper refused
> ---
>
> Key: KAFKA-7721
> URL: https://issues.apache.org/jira/browse/KAFKA-7721
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.0
> Environment: Dockerized containers - kubernetes 1.9
>Reporter: Mohammad Etemad
>Priority: Major
>
> Kafka throws an exception when trying to connect to ZooKeeper. This happens when 
> the ZooKeeper connection is lost and then recovers. Kafka seems to be stuck in a 
> loop and cannot renew the connection. Here are the logs:
> 2018-12-11 13:52:52,905] INFO Opening socket connection to server 
> zookeeper-0.zookeeper.logging.svc.cluster.local/10.38.128.12:2181. Will not 
> attempt to authenticate using SASL (unknown error) 
> (org.apache.zookeeper.ClientCnxn)
> [2018-12-11 13:52:52,906] WARN Session 0x1001443ad77000f for server null, 
> unexpected error, closing socket connection and attempting reconnect 
> (org.apache.zookeeper.ClientCnxn)
> java.net.ConnectException: Connection refused
>  at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>  at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
>  at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
> On the zookeeper side it can be seen that kafka connection is established. 
> Here are the logs:
> 2018-12-11 13:53:44,969 [myid:] - INFO 
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - 
> Accepted socket connection from /10.38.128.8:46066
> 2018-12-11 13:53:44,976 [myid:] - INFO 
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@938] - Client 
> attempting to establish new session at /10.38.128.8:46066
> 2018-12-11 13:53:45,005 [myid:] - INFO [SyncThread:0:ZooKeeperServer@683] - 
> Established session 0x10060ff12a58dc0 with negotiated timeout 3 for 
> client /10.38.128.8:46066
> 2018-12-11 13:53:45,071 [myid:] - INFO [ProcessThread(sid:0 
> cport:2181)::PrepRequestProcessor@487] - Processed session termination for 
> sessionid: 0x10060ff12a58dc0
> 2018-12-11 13:53:45,077 [myid:] - INFO 
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1040] - Closed 
> socket connection for client /10.38.128.8:46066 which had sessionid 
> 0x10060ff12a58dc0
> 2018-12-11 13:53:47,119 [myid:] - INFO 
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - 
> Accepted socket connection from /10.36.0.8:48798
> 2018-12-11 13:53:47,124 [myid:] - INFO 
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@938] - Client 
> attempting to establish new session at /10.36.0.8:48798
> 2018-12-11 13:53:47,134 [myid:] - INFO [SyncThread:0:ZooKeeperServer@683] - 
> Established session 0x10060ff12a58dc1 with negotiated timeout 3 for 
> client /10.36.0.8:48798
> 2018-12-11 13:53:47,582 [myid:] - INFO [ProcessThread(sid:0 
> cport:2181)::PrepRequestProcessor@487] - Processed session termination for 
> sessionid: 0x10060ff12a58dc1
> 2018-12-11 13:53:47,592 [myid:] - INFO 
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1040] - Closed 
> socket connection for client /10.36.0.8:48798 which had sessionid 
> 0x10060ff12a58dc1



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-6584) Session expiration concurrent with ZooKeeper leadership failover may lead to broker registration failure

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-6584.
---
Resolution: Won't Fix

We are now removing ZooKeeper support so closing this issue.

> Session expiration concurrent with ZooKeeper leadership failover may lead to 
> broker registration failure
> 
>
> Key: KAFKA-6584
> URL: https://issues.apache.org/jira/browse/KAFKA-6584
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 1.0.0
>Reporter: Chris Thunes
>Priority: Major
>
> It seems that an edge case exists which can lead to sessions "un-expiring" 
> during a ZooKeeper leadership failover. Additional details can be found in 
> ZOOKEEPER-2985.
> This leads to a NODEXISTS error when attempting to re-create the ephemeral 
> brokers/ids/\{id} node in ZkUtils.registerBrokerInZk. We experienced this 
> issue on each node within a 3-node Kafka cluster running 1.0.0. All three 
> nodes continued running (producers and consumers appeared unaffected), but 
> none of the nodes were considered online and partition leadership could not 
> be re-assigned.
> I took a quick look at trunk and I believe the issue is still present, but 
> has moved into KafkaZkClient.checkedEphemeralCreate which will [raise an 
> error|https://github.com/apache/kafka/blob/90e0bbe/core/src/main/scala/kafka/zk/KafkaZkClient.scala#L1512]
>  when it finds that the broker/ids/\{id} node exists, but belongs to the old 
> (believed expired) session.
>  
> *NOTE:* KAFKA-7165 introduce a workaround to cope with the case described 
> here. We decided to keep this issue open to track the ZOOKEEPER-2985 status.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-7122) Data is lost when ZooKeeper times out

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-7122.
---
Resolution: Won't Fix

We are now removing ZooKeeper support so closing this issue.

> Data is lost when ZooKeeper times out
> -
>
> Key: KAFKA-7122
> URL: https://issues.apache.org/jira/browse/KAFKA-7122
> Project: Kafka
>  Issue Type: Bug
>  Components: core, replication
>Affects Versions: 0.11.0.2
>Reporter: Nick Lipple
>Priority: Blocker
>
> We noticed that a Kafka cluster will lose data when the leader for a partition 
> has its ZooKeeper connection time out.
> Sequence of events:
>  # Say broker A leads a partition followed by brokers B and C
>  # A ZK node has a network issue, happens to be the node used by broker A. 
> Lets say this happens at offset X
>  # Kafka Controller immediately selects broker C as the new partition leader
>  # Broker A does not time out from ZooKeeper for another 4 seconds. Broker A 
> still thinks it is the leader, presumably accepting producer writes.
>  # Broker A detects the ZK timeout and leaves the ISR.
>  # Broker A reconnects to ZK, rejoins cluster as follower for partition
>  # Broker A truncates log to some offset Y such that Y > X. Broker A proceeds 
> to catch up normally and becomes an ISR
>  # ISRs for partition are now in an inconsistent state:
>  ## Broker C has all offsets X through Y plus everything after
>  ## Broker B has all offsets X through Y plus everything after
>  ## Broker A has offsets up to X and after Y. Everything between X and Y *IS 
> MISSING*
>  # Within 5 minutes, the controller triggers a preferred replica election, 
> making Broker A the new leader for the partition (this is default behavior)
> All consumers after step 9 will not receive any messages for offsets between 
> X and Y.
>  
> The root problem here seems to be broker A truncates to offset Y when 
> rejoining the cluster. It should be truncating further back to offset X to 
> prevent data loss
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-5885) NPE in ZKClient

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-5885.
---
Resolution: Won't Fix

We are now removing ZooKeeper support so closing this issue.

> NPE in ZKClient
> ---
>
> Key: KAFKA-5885
> URL: https://issues.apache.org/jira/browse/KAFKA-5885
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 0.10.2.1
>Reporter: Dustin Cote
>Priority: Major
>
> A null znode for a topic (how this happened isn't totally clear, but it's not 
> the focus of this issue) can currently cause controller leader election to 
> fail. When looking at the broker logging, you can see there is a 
> NullPointerException emanating from the ZKClient:
> {code}
> [2017-09-11 00:00:21,441] ERROR Error while electing or becoming leader on 
> broker 1010674 (kafka.server.ZookeeperLeaderElector)
> kafka.common.KafkaException: Can't parse json string: null
> at kafka.utils.Json$.liftedTree1$1(Json.scala:40)
> at kafka.utils.Json$.parseFull(Json.scala:36)
> at 
> kafka.utils.ZkUtils$$anonfun$getReplicaAssignmentForTopics$1.apply(ZkUtils.scala:704)
> at 
> kafka.utils.ZkUtils$$anonfun$getReplicaAssignmentForTopics$1.apply(ZkUtils.scala:700)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at 
> kafka.utils.ZkUtils.getReplicaAssignmentForTopics(ZkUtils.scala:700)
> at 
> kafka.controller.KafkaController.initializeControllerContext(KafkaController.scala:742)
> at 
> kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:333)
> at 
> kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:160)
> at 
> kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:85)
> at 
> kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply$mcZ$sp(ZookeeperLeaderElector.scala:154)
> at 
> kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:154)
> at 
> kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:154)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
> at 
> kafka.server.ZookeeperLeaderElector$LeaderChangeListener.handleDataDeleted(ZookeeperLeaderElector.scala:153)
> at org.I0Itec.zkclient.ZkClient$9.run(ZkClient.java:825)
> at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:72)
> Caused by: java.lang.NullPointerException
> {code}
> Regardless of how a null topic znode ended up in ZooKeeper, we can probably 
> handle this better, at least by printing the path up to the problematic znode 
> in the log. The way this particular problem was resolved was by using the 
> ``kafka-topics`` command and seeing it persistently fail trying to read a 
> particular topic with this same message. Then deleting the null znode allowed 
> the leader election to complete.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-3287) Add over-wire encryption support between KAFKA and ZK

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-3287.
---
Resolution: Duplicate

TLS between Kafka and ZooKeeper has been supported for many years, closing.

> Add over-wire encryption support between KAFKA and ZK
> -
>
> Key: KAFKA-3287
> URL: https://issues.apache.org/jira/browse/KAFKA-3287
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ashish Singh
>Assignee: Ashish Singh
>Priority: Major
>
> ZOOKEEPER-2125 added support for SSL. After Kafka upgrades ZK's dependency to 
> 3.5.1+ or 3.6.0+, SSL support between kafka broker and zk can be added.
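For reference, the broker-side configuration that eventually shipped with KIP-515 looks roughly like this; the paths and passwords are placeholders:

{code}
# server.properties: TLS to ZooKeeper
zookeeper.ssl.client.enable=true
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.keystore.location=/path/to/zk-client.keystore.p12
zookeeper.ssl.keystore.password=changeit
zookeeper.ssl.truststore.location=/path/to/zk-client.truststore.p12
zookeeper.ssl.truststore.password=changeit
{code}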



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-3288) Update ZK dependency to 3.5.1 when it is marked as stable

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-3288.
---
Resolution: Won't Do

We are now removing ZooKeeper support so closing this issue.

> Update ZK dependency to 3.5.1 when it is marked as stable
> -
>
> Key: KAFKA-3288
> URL: https://issues.apache.org/jira/browse/KAFKA-3288
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ashish Singh
>Assignee: Ashish Singh
>Priority: Major
>
> When a stable version of ZK 3.5.1+ is released, update Kafka's ZK dependency.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-1918) System test for ZooKeeper quorum failure scenarios

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-1918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-1918.
---
Resolution: Won't Do

We are now removing ZooKeeper support so closing this issue.

> System test for ZooKeeper quorum failure scenarios
> --
>
> Key: KAFKA-1918
> URL: https://issues.apache.org/jira/browse/KAFKA-1918
> Project: Kafka
>  Issue Type: Test
>  Components: system tests
>Reporter: Omid Aladini
>Priority: Major
>
> Following up on the [conversation on the mailing 
> list|http://mail-archives.apache.org/mod_mbox/kafka-users/201502.mbox/%3CCAHwHRrX3SAWDUGF5LjU4rrMUsqv%3DtJcyjX7OENeL5C_V5o3tCw%40mail.gmail.com%3E],
>  the FAQ writes:
> {quote}
> Once the Zookeeper quorum is down, brokers could result in a bad state and 
> could not normally serve client requests, etc. Although when Zookeeper quorum 
> recovers, the Kafka brokers should be able to resume to normal state 
> automatically, _there are still a few +corner cases+ where they cannot and a 
> hard kill-and-recovery is required to bring it back to normal_. Hence it is 
> recommended to closely monitor your zookeeper cluster and provision it so 
> that it is performant.
> {quote}
> As ZK quorum failures are inevitable (due to rolling upgrades of ZK, leader 
> hardware failure, etc), it would be great to identify the corner cases (if 
> they still exist) and fix them if necessary.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-2762) Add a log appender for zookeeper in the log4j.properties file

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-2762.
---
Resolution: Won't Do

We are now removing ZooKeeper support so closing this issue.

> Add a log appender for zookeeper in the log4j.properties file
> -
>
> Key: KAFKA-2762
> URL: https://issues.apache.org/jira/browse/KAFKA-2762
> Project: Kafka
>  Issue Type: Task
>  Components: log
>Reporter: Raju Bairishetti
>Assignee: Jay Kreps
>Priority: Major
>
> The log4j.properties file is present under the config directory. Right now, we are 
> using this log4j file from the daemon scripts, e.g. the kafka-server-start.sh & 
> zookeeper-start.sh scripts. I am not seeing any log appender for the zookeeper 
> daemon in the log4j properties.
>  *IMO, we should add a log appender for zookeeper in the log4j properties 
> file to redirect zookeeper logs to different file.*
> Zookeeper logs will be printed on console if we use the existing the log4j 
> properties file. I agree that users can still update log4j file but it would 
> be nice to keep the appender so that every user does not need to modify the 
> log4j file.
> I have recently started using Kafka. Could anyone please add me as a 
> contributor to this project?
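Something along these lines in config/log4j.properties would do it; the appender name and file are examples:

{code}
log4j.appender.zookeeperAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.zookeeperAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.zookeeperAppender.File=${kafka.logs.dir}/zookeeper.log
log4j.appender.zookeeperAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.zookeeperAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.logger.org.apache.zookeeper=INFO, zookeeperAppender
log4j.additivity.org.apache.zookeeper=false
{code}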



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-2404) Delete config znode when config values are empty

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-2404.
---
Resolution: Won't Do

We are now removing ZooKeeper support so closing this issue.

> Delete config znode when config values are empty
> 
>
> Key: KAFKA-2404
> URL: https://issues.apache.org/jira/browse/KAFKA-2404
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Major
>
> Jun's comment from KAFKA-2205:
> "Currently, if I add client config and then remove it, the clientid still 
> shows up during describe, but with empty config values. We probably should 
> delete the path when there is no overwritten values. Could you do that in a 
> follow up patch?
> bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type client 
> --describe 
> Configs for client:client1 are"
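For reference, the override can be removed explicitly instead of being set to an empty value; the entity name and config key below are examples:

{code}
bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type clients \
  --entity-name client1 --alter --delete-config producer_byte_rate
{code}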



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-2204) Dynamic Configuration via ZK

2024-10-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-2204.
---
Resolution: Won't Do

We are now removing ZooKeeper support so closing this issue.

> Dynamic Configuration via ZK
> 
>
> Key: KAFKA-2204
> URL: https://issues.apache.org/jira/browse/KAFKA-2204
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Major
>
> Parent ticket to track all jiras for dynamic configuration via zookeeper.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17746) Replace JavaConverters with CollectionConverters

2024-10-09 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17746:
--

 Summary: Replace JavaConverters with CollectionConverters
 Key: KAFKA-17746
 URL: https://issues.apache.org/jira/browse/KAFKA-17746
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison


scala.collection.JavaConverters is deprecated; since Scala 2.13 we can use 
scala.jdk.CollectionConverters instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17730) ReplicaFetcherThreadBenchmark is broken

2024-10-08 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17730:
--

 Summary: ReplicaFetcherThreadBenchmark is broken
 Key: KAFKA-17730
 URL: https://issues.apache.org/jira/browse/KAFKA-17730
 Project: Kafka
  Issue Type: Bug
Reporter: Mickael Maison


Output from running this benchmark:



{noformat}
./jmh.sh ReplicaFetcherThreadBenchmark
running gradlew :jmh-benchmarks:clean :jmh-benchmarks:shadowJar

> Configure project :
Starting build with version 4.0.0-SNAPSHOT (commit id 15594bbb) using Gradle 
8.10, Java 17 and Scala 2.13.15
Build properties: ignoreFailures=false, maxParallelForks=12, 
maxScalacThreads=8, maxTestRetries=0

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

For more on this, please refer to 
https://docs.gradle.org/8.10/userguide/command_line_interface.html#sec:command_line_warnings
 in the Gradle documentation.

BUILD SUCCESSFUL in 31s
96 actionable tasks: 22 executed, 74 up-to-date
gradle build done
running JMH with args: ReplicaFetcherThreadBenchmark
# JMH version: 1.37
# VM version: JDK 17.0.9, OpenJDK 64-Bit Server VM, 17.0.9+9
# VM invoker: /Users/mickael/.sdkman/candidates/java/17.0.9-tem/bin/java
# VM options: 
# Blackhole mode: compiler (auto-detected, use -Djmh.blackhole.autoDetect=false 
to disable)
# Warmup: 5 iterations, 10 s each
# Measurement: 15 iterations, 10 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Average time, time/op
# Benchmark: 
org.apache.kafka.jmh.fetcher.ReplicaFetcherThreadBenchmark.testFetcher
# Parameters: (partitionCount = 100)

# Run progress: 0.00% complete, ETA 00:13:20
# Fork: 1 of 1
# Warmup Iteration   1: [2024-10-08 17:32:04,626] WARN The new 'consumer' 
rebalance protocol is only supported in KRaft cluster with the new group 
coordinator. (kafka.server.KafkaConfig:70)


org.apache.kafka.common.errors.NotLeaderOrFollowerException: Error while 
fetching partition state for topic-30




Non-finished threads:

Thread[ExpirationReaper--1-ElectLeader,5,main]
  at java.base@17.0.9/jdk.internal.misc.Unsafe.park(Native Method)
  at 
java.base@17.0.9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:252)
  at 
java.base@17.0.9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1672)
  at java.base@17.0.9/java.util.concurrent.DelayQueue.poll(DelayQueue.java:265)
  at 
app//org.apache.kafka.server.util.timer.SystemTimer.advanceClock(SystemTimer.java:90)
  at 
app//kafka.server.DelayedOperationPurgatory.advanceClock(DelayedOperation.scala:418)
  at 
app//kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper.doWork(DelayedOperation.scala:444)
  at 
app//org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:136)

Thread[ExpirationReaper--1-RemoteFetch,5,main]
  at java.base@17.0.9/jdk.internal.misc.Unsafe.park(Native Method)
  at 
java.base@17.0.9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:252)
  at 
java.base@17.0.9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1672)
  at java.base@17.0.9/java.util.concurrent.DelayQueue.poll(DelayQueue.java:265)
  at 
app//org.apache.kafka.server.util.timer.SystemTimer.advanceClock(SystemTimer.java:90)
  at 
app//kafka.server.DelayedOperationPurgatory.advanceClock(DelayedOperation.scala:418)
  at 
app//kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper.doWork(DelayedOperation.scala:444)
  at 
app//org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:136)

Thread[ExpirationReaper--1-Fetch,5,main]
  at java.base@17.0.9/jdk.internal.misc.Unsafe.park(Native Method)
  at 
java.base@17.0.9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:252)
  at 
java.base@17.0.9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1672)
  at java.base@17.0.9/java.util.concurrent.DelayQueue.poll(DelayQueue.java:265)
  at 
app//org.apache.kafka.server.util.timer.SystemTimer.advanceClock(SystemTimer.java:90)
  at 
app//kafka.server.DelayedOperationPurgatory.advanceClock(DelayedOperation.scala:418)
  at 
app//kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper.doWork(DelayedOperation.scala:444)
  at 
app//org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:136)

Thread[ExpirationReaper--1-DeleteRecords,5,main]
  at java.base@17.0.9/jdk.internal.misc.Unsafe.park(Native Method)
  at 
java.base@17.0.9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:252)
  at 
java.base@17.0.9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchron
{noformat}

[jira] [Created] (KAFKA-17729) Remove ZooKeeper from jmh-benchmarks

2024-10-08 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17729:
--

 Summary: Remove ZooKeeper from jmh-benchmarks
 Key: KAFKA-17729
 URL: https://issues.apache.org/jira/browse/KAFKA-17729
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14158) KIP-830 cleanups for Kafka 4.0

2024-10-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14158.

Resolution: Fixed

> KIP-830 cleanups for Kafka 4.0
> --
>
> Key: KAFKA-14158
> URL: https://issues.apache.org/jira/browse/KAFKA-14158
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 4.0.0
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 4.0.0
>
>
> KIP-830 introduced a few changes that should be made in Kafka 4.0:
> - Remove the auto.include.jmx.reporter configuration
> - Update metric.reporters default value to be 
> org.apache.kafka.common.metrics.JmxReporter



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-17679) Remove kafka.security.authorizer.AclAuthorizer from AclCommand

2024-10-04 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-17679.

Resolution: Duplicate

> Remove kafka.security.authorizer.AclAuthorizer from AclCommand
> --
>
> Key: KAFKA-17679
> URL: https://issues.apache.org/jira/browse/KAFKA-17679
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17692) Remove KafkaServer references in streams tests

2024-10-03 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17692:
--

 Summary: Remove KafkaServer references in streams tests
 Key: KAFKA-17692
 URL: https://issues.apache.org/jira/browse/KAFKA-17692
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17691) Remove KafkaServer references in tools tests

2024-10-03 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17691:
--

 Summary: Remove KafkaServer references in tools tests
 Key: KAFKA-17691
 URL: https://issues.apache.org/jira/browse/KAFKA-17691
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17679) Remove kafka.security.authorizer.AclAuthorizer from AclCommand

2024-10-02 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17679:
--

 Summary: Remove kafka.security.authorizer.AclAuthorizer from 
AclCommand
 Key: KAFKA-17679
 URL: https://issues.apache.org/jira/browse/KAFKA-17679
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison
Assignee: Mickael Maison






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17662) config.providers configuration missing from the docs

2024-09-30 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17662:
--

 Summary: config.providers configuration missing from the docs
 Key: KAFKA-17662
 URL: https://issues.apache.org/jira/browse/KAFKA-17662
 Project: Kafka
  Issue Type: Bug
Reporter: Mickael Maison


The config.providers configuration is only listed in the Connect configuration 
documentation. Since it's usable by all components, it should be listed in all 
the configuration sections.
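For illustration, a hedged snippet of what such a section could document; the provider alias and secrets file path are examples:

{code}
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
# any config value can then reference the provider:
ssl.keystore.password=${file:/etc/kafka/secrets.properties:keystore.password}
{code}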



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16188) Delete deprecated kafka.common.MessageReader

2024-09-05 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16188.

Resolution: Fixed

> Delete deprecated kafka.common.MessageReader
> 
>
> Key: KAFKA-16188
> URL: https://issues.apache.org/jira/browse/KAFKA-16188
> Project: Kafka
>  Issue Type: Task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 4.0.0
>
>
> [KIP-641|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158866569]
>  introduced org.apache.kafka.tools.api.RecordReader and deprecated 
> kafka.common.MessageReader in Kafka 3.5.0.
> We should delete kafka.common.MessageReader in Kafka 4.0.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17468) Move kafka.log.remote.quota to storage module

2024-09-03 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17468:
--

 Summary: Move kafka.log.remote.quota to storage module
 Key: KAFKA-17468
 URL: https://issues.apache.org/jira/browse/KAFKA-17468
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison
Assignee: Mickael Maison






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-17430) Move RequestChannel.Metrics and RequestChannel.RequestMetrics to server module

2024-09-03 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-17430.

Fix Version/s: 4.0.0
   Resolution: Fixed

> Move RequestChannel.Metrics and RequestChannel.RequestMetrics to server module
> --
>
> Key: KAFKA-17430
> URL: https://issues.apache.org/jira/browse/KAFKA-17430
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 4.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17449) Move Quota classes to server module

2024-08-30 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17449:
--

 Summary: Move Quota classes to server module
 Key: KAFKA-17449
 URL: https://issues.apache.org/jira/browse/KAFKA-17449
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison
Assignee: Mickael Maison






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17430) Move RequestChannel.Metrics and RequestChannel.RequestMetrics to server module

2024-08-27 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17430:
--

 Summary: Move RequestChannel.Metrics and 
RequestChannel.RequestMetrics to server module
 Key: KAFKA-17430
 URL: https://issues.apache.org/jira/browse/KAFKA-17430
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison
Assignee: Mickael Maison






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-17353) Separate unsupported releases in downloads page on website

2024-08-27 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-17353.

Resolution: Fixed

> Separate unsupported releases in downloads page on website
> --
>
> Key: KAFKA-17353
> URL: https://issues.apache.org/jira/browse/KAFKA-17353
> Project: Kafka
>  Issue Type: Task
>  Components: website
>Reporter: Mickael Maison
>Assignee: Federico Valeri
>Priority: Major
>
> Currently we list all releases on https://kafka.apache.org/downloads
> We should have the supported releases at the top clearly identified and list 
> archived releases below. As per 
> https://cwiki.apache.org/confluence/display/KAFKA/Time+Based+Release+Plan#TimeBasedReleasePlan-WhatIsOurEOLPolicy?
>  we support the last 3 releases.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-17193) Pin external GitHub actions to specific git hash

2024-08-23 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-17193.

Fix Version/s: 4.0.0
   Resolution: Fixed

> Pin external GitHub actions to specific git hash
> 
>
> Key: KAFKA-17193
> URL: https://issues.apache.org/jira/browse/KAFKA-17193
> Project: Kafka
>  Issue Type: Task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 4.0.0
>
>
> As per [https://infra.apache.org/github-actions-policy.html] we must pin any 
> GitHub action that is not from the apache/*, github/* and actions/* 
> namespaces to a specific git hash.
> We are currently using actions from aquasecurity and docker and these are not 
> pinned to specific git hashes.
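For illustration, pinning in a workflow file looks like this; the hash below is a placeholder, not a real revision:

{code}
# .github/workflows/*.yml: third-party actions referenced by full commit hash
- uses: aquasecurity/trivy-action@0123456789abcdef0123456789abcdef01234567 # vX.Y.Z
{code}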



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17353) Separate unsupported releases in downloads page on website

2024-08-16 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17353:
--

 Summary: Separate unsupported releases in downloads page on website
 Key: KAFKA-17353
 URL: https://issues.apache.org/jira/browse/KAFKA-17353
 Project: Kafka
  Issue Type: Task
  Components: website
Reporter: Mickael Maison


Currently we list all releases on https://kafka.apache.org/downloads

We should have the supported releases at the top clearly identified and list 
archived releases below. As per 
https://cwiki.apache.org/confluence/display/KAFKA/Time+Based+Release+Plan#TimeBasedReleasePlan-WhatIsOurEOLPolicy?
 we support the last 3 releases.




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17301) lz4-java is not maintained anymore

2024-08-08 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17301:
--

 Summary: lz4-java is not maintained anymore
 Key: KAFKA-17301
 URL: https://issues.apache.org/jira/browse/KAFKA-17301
 Project: Kafka
  Issue Type: Task
Reporter: Mickael Maison


lz4-java has not made a release since June 2021. It still depends on lz4 1.9.3 
which has a critical CVE (though it does not seem exploitable in our case): 
[CVE-2021-3520|https://nvd.nist.gov/vuln/detail/CVE-2021-3520].





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17246) Simplify the process of building a test docker image

2024-08-02 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17246:
--

 Summary: Simplify the process of building a test docker image
 Key: KAFKA-17246
 URL: https://issues.apache.org/jira/browse/KAFKA-17246
 Project: Kafka
  Issue Type: Improvement
Reporter: Mickael Maison


The docker_build_test.py script requires a URL and a signature file. This makes 
it hard to build a test image locally with a custom Kafka binary.

It would be nice to have a way to point it to a local distribution artifact 
instead and ignore the signature check.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15469) Document built-in configuration providers

2024-07-30 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15469.

Fix Version/s: 4.0.0
   Resolution: Fixed

> Document built-in configuration providers
> -
>
> Key: KAFKA-15469
> URL: https://issues.apache.org/jira/browse/KAFKA-15469
> Project: Kafka
>  Issue Type: Task
>  Components: documentation
>Reporter: Mickael Maison
>Assignee: Paul Mellor
>Priority: Major
> Fix For: 4.0.0
>
>
> Kafka has 3 built-in ConfigProvider implementations:
> * DirectoryConfigProvider
> * EnvVarConfigProvider
> * FileConfigProvider
> These don't appear anywhere in the documentation. We should at least mention 
> them and probably even demonstrate how to use them.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14614) Missing cluster tool script for Windows

2024-07-30 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14614.

Fix Version/s: 3.6.0
   Resolution: Fixed

> Missing cluster tool script for Windows
> ---
>
> Key: KAFKA-14614
> URL: https://issues.apache.org/jira/browse/KAFKA-14614
> Project: Kafka
>  Issue Type: Bug
>Reporter: Mickael Maison
>Priority: Major
> Fix For: 3.6.0
>
>
> We have the kafka-cluster.sh script to run ClusterTool but there's no 
> matching script for Windows.
> We should check if other scripts are missing too.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17193) Pin external GitHub actions to specific git hash

2024-07-24 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17193:
--

 Summary: Pin external GitHub actions to specific git hash
 Key: KAFKA-17193
 URL: https://issues.apache.org/jira/browse/KAFKA-17193
 Project: Kafka
  Issue Type: Task
Reporter: Mickael Maison


As per [https://infra.apache.org/github-actions-policy.html] we must pin any 
GitHub action that is not from the apache/*, github/* and actions/* namespaces 
to a specific git hash.

We are currently using actions from aquasecurity and docker and these are not 
pinned to specific git hashes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17137) Ensure Admin APIs are properly tested

2024-07-15 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17137:
--

 Summary: Ensure Admin APIs are properly tested
 Key: KAFKA-17137
 URL: https://issues.apache.org/jira/browse/KAFKA-17137
 Project: Kafka
  Issue Type: Improvement
  Components: admin
Reporter: Mickael Maison


A number of Admin client APIs don't have integration tests. While testing 3.8.0 
RC0 we discovered that the Admin.describeTopics() API hung. This should have been 
caught by tests.

I suggest creating subtasks for each API that needs tests.
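As an illustration of the kind of coverage that would have caught the hang, a hedged Java sketch; the topic name and bootstrap address are examples, and allTopicNames() assumes a recent client:

{code}
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class DescribeTopicsSmokeTest {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // A bounded get() turns a hung API call into a test failure
            // instead of a silent stall.
            Map<String, TopicDescription> topics = admin
                .describeTopics(List.of("test-topic"))
                .allTopicNames()
                .get(30, TimeUnit.SECONDS);
            System.out.println(topics.get("test-topic"));
        }
    }
}
{code}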



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16254) Allow MM2 to fully disable offset sync feature

2024-07-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16254.

Resolution: Fixed

> Allow MM2 to fully disable offset sync feature
> --
>
> Key: KAFKA-16254
> URL: https://issues.apache.org/jira/browse/KAFKA-16254
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.5.0, 3.6.0, 3.7.0
>Reporter: Omnia Ibrahim
>Assignee: Omnia Ibrahim
>Priority: Major
>  Labels: need-kip
> Fix For: 3.9.0
>
>
> *Background:* 
> At the moment the offset syncing feature in MM2 is split into 2 parts:
>  # One is in `MirrorSourceTask` where we store the new record's offset on the 
> target cluster in the {{offset_syncs}} internal topic after mirroring the record. 
> Before KAFKA-14610 in 3.5, MM2 used to just queue the offsets and publish them 
> later, but since 3.5 this behaviour changed: we now publish any offset syncs 
> that we've queued up but have not yet been able to publish when 
> `MirrorSourceTask.commit` gets invoked. This introduced an overhead to the 
> commit process.
>  # The second part is in the checkpoints source task where we use the new record 
> offsets from {{offset_syncs}} and update the {{checkpoints}} and 
> {{__consumer_offsets}} topics.
> *Problem:*
> Customers who only use MM2 for mirroring data and are not interested in the 
> offset syncing feature can currently disable the second part of this feature 
> by disabling {{emit.checkpoints.enabled}} and/or 
> {{sync.group.offsets.enabled}} to stop emitting to the {{__consumer_offsets}} 
> topic, but nothing disables the 1st part of the feature.
> The problem gets worse if they also prevent MM2 from creating the offset syncs 
> internal topic, as:
> 1. This will increase overhead, as MM2 will force an offset update with every 
> mirrored batch, impacting the performance of MM2.
> 2. They get too many error logs, because the offset syncs topic they never 
> created is missing since they don't use the feature.
> *Possible solution:*
> Allow customers to fully disable the feature if they don't really need it, 
> similar to how we can fully disable other MM2 features like the heartbeat 
> feature, by adding a new config.
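For reference, the pre-existing knobs mentioned above look like this in an MM2 properties file; the us->eu flow alias is an example, and the fix tracked here adds a further config to switch off emission to offset_syncs itself:

{code}
# connect-mirror-maker.properties: disable the checkpoint side of offset syncing
us->eu.emit.checkpoints.enabled=false
us->eu.sync.group.offsets.enabled=false
{code}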



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17072) Document broker decommissioning process with KRaft

2024-07-03 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17072:
--

 Summary: Document broker decommissioning process with KRaft
 Key: KAFKA-17072
 URL: https://issues.apache.org/jira/browse/KAFKA-17072
 Project: Kafka
  Issue Type: Improvement
  Components: docs
Reporter: Mickael Maison


When decommissioning a broker in KRaft mode, the broker also has to be 
explicitly unregistered. This is not mentioned anywhere in the documentation.

A broker that is not unregistered stays eligible for new partition assignments and 
will prevent bumping the metadata version if the remaining brokers are upgraded.
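A hedged sketch of the unregistration step; the broker id and bootstrap address are examples:

{code}
# after the broker is permanently shut down, remove its registration so it
# is no longer eligible for new partition assignments
bin/kafka-cluster.sh unregister --bootstrap-server localhost:9092 --id 5
{code}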



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14109) Clean up JUnit 4 test infrastructure

2024-06-27 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14109.

Resolution: Duplicate

> Clean up JUnit 4 test infrastructure
> 
>
> Key: KAFKA-14109
> URL: https://issues.apache.org/jira/browse/KAFKA-14109
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Christo Lolov
>Assignee: Christo Lolov
>Priority: Major
>
> We need to clean up the setup in 
> https://issues.apache.org/jira/browse/KAFKA-14108 once the JUnit 4 to JUnit 5 
> migration is complete.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-7342) Migrate streams modules to JUnit 5

2024-06-27 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-7342.
---
Resolution: Duplicate

> Migrate streams modules to JUnit 5
> --
>
> Key: KAFKA-7342
> URL: https://issues.apache.org/jira/browse/KAFKA-7342
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, unit tests
>Reporter: Ismael Juma
>Assignee: Christo Lolov
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17027) Inconsistent casing in Selector metrics tags

2024-06-24 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17027:
--

 Summary: Inconsistent casing in Selector metrics tags
 Key: KAFKA-17027
 URL: https://issues.apache.org/jira/browse/KAFKA-17027
 Project: Kafka
  Issue Type: Improvement
  Components: core, metrics
Reporter: Mickael Maison


When creating metric tags for a Selector instance, we use "broker-id" in 
ControllerChannelManager, BrokerBlockingSender and ReplicaFetcherBlockingSend 
but we use "BrokerId" in NodeToControllerChannelManagerImpl.

Not only are these casings inconsistent across metrics tags for the same 
component (Selector), but it looks like neither matches the casing used for 
other broker metrics!

We seem to always use lower camel case for tags for broker metrics. For 
example, we have "networkProcessor", "clientId", "delayedOperation", 
"clientSoftwareName", "clientSoftwareVersion" as tags on other metrics.

Fixing this will require a KIP.
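
To make the inconsistency concrete, a minimal sketch (variable names are 
illustrative, not the actual call sites):
{code:java}
// ControllerChannelManager / BrokerBlockingSender / ReplicaFetcherBlockingSend style:
Map<String, String> channelManagerTags = new HashMap<>();
channelManagerTags.put("broker-id", String.valueOf(brokerId));

// NodeToControllerChannelManagerImpl style: same component, different casing:
Map<String, String> nodeToControllerTags = new HashMap<>();
nodeToControllerTags.put("BrokerId", String.valueOf(brokerId));
{code}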



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-17008) Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944

2024-06-21 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-17008.

Resolution: Duplicate

> Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944
> 
>
> Key: KAFKA-17008
> URL: https://issues.apache.org/jira/browse/KAFKA-17008
> Project: Kafka
>  Issue Type: Bug
>Reporter: Arushi Helms
>Priority: Major
>
> Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944.
> I could not find an existing ticket for this; if there is one then please 
> mark this as a duplicate. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16998) Fix warnings in our Github actions

2024-06-19 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16998:
--

 Summary: Fix warnings in our Github actions
 Key: KAFKA-16998
 URL: https://issues.apache.org/jira/browse/KAFKA-16998
 Project: Kafka
  Issue Type: Task
  Components: build
Reporter: Mickael Maison


Most of our Github actions produce warnings, see 
https://github.com/apache/kafka/actions/runs/9572915509 for example.


It looks like we need to bump the version we use for actions/checkout, 
actions/setup-python, actions/upload-artifact to v4.
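
A sketch of the kind of bump this implies in the workflow files (the step 
contents are illustrative, not our actual workflows):
{noformat}
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-python@v4
    with:
      python-version: '3.10'
  - uses: actions/upload-artifact@v4
    with:
      name: test-results
      path: build/reports
{noformat}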



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15752) KRaft support in SaslSslAdminIntegrationTest

2024-06-17 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15752.

Fix Version/s: 3.9.0
   Resolution: Fixed

> KRaft support in SaslSslAdminIntegrationTest
> 
>
> Key: KAFKA-15752
> URL: https://issues.apache.org/jira/browse/KAFKA-15752
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Gantigmaa Selenge
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.9.0
>
>
> The following tests in SaslSslAdminIntegrationTest in 
> core/src/test/scala/integration/kafka/api/SaslSslAdminIntegrationTest.scala 
> need to be updated to support KRaft
> 95 : def testAclOperations(): Unit = {
> 116 : def testAclOperations2(): Unit = {
> 142 : def testAclDescribe(): Unit = {
> 169 : def testAclDelete(): Unit = {
> 219 : def testLegacyAclOpsNeverAffectOrReturnPrefixed(): Unit = {
> 256 : def testAttemptToCreateInvalidAcls(): Unit = {
> 351 : def testAclAuthorizationDenied(): Unit = {
> 383 : def testCreateTopicsResponseMetadataAndConfig(): Unit = {
> Scanned 527 lines. Found 0 KRaft tests out of 8 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15751) KRaft support in BaseAdminIntegrationTest

2024-06-17 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15751.

Fix Version/s: 3.9.0
   Resolution: Fixed

> KRaft support in BaseAdminIntegrationTest
> -
>
> Key: KAFKA-15751
> URL: https://issues.apache.org/jira/browse/KAFKA-15751
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Gantigmaa Selenge
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.9.0
>
>
> The following tests in BaseAdminIntegrationTest in 
> core/src/test/scala/integration/kafka/api/BaseAdminIntegrationTest.scala need 
> to be updated to support KRaft
> 70 : def testCreateDeleteTopics(): Unit = {
> 163 : def testAuthorizedOperations(): Unit = {
> Scanned 259 lines. Found 0 KRaft tests out of 2 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16974) KRaft support in SslAdminIntegrationTest

2024-06-17 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16974:
--

 Summary: KRaft support in SslAdminIntegrationTest
 Key: KAFKA-16974
 URL: https://issues.apache.org/jira/browse/KAFKA-16974
 Project: Kafka
  Issue Type: Task
  Components: core
Reporter: Mickael Maison


This class needs to be updated to support KRaft



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16865) Admin.describeTopics behavior change after KIP-966

2024-06-12 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16865.

Resolution: Fixed

> Admin.describeTopics behavior change after KIP-966
> --
>
> Key: KAFKA-16865
> URL: https://issues.apache.org/jira/browse/KAFKA-16865
> Project: Kafka
>  Issue Type: Task
>  Components: admin, clients
>Affects Versions: 3.8.0
>Reporter: Mickael Maison
>Assignee: Gantigmaa Selenge
>Priority: Major
> Fix For: 3.9.0
>
>
> Running the following code produces different behavior between ZooKeeper and 
> KRaft:
> {code:java}
> DescribeTopicsOptions options = new 
> DescribeTopicsOptions().includeAuthorizedOperations(false);
> TopicCollection topics = 
> TopicCollection.ofTopicNames(Collections.singletonList(topic));
> DescribeTopicsResult describeTopicsResult = admin.describeTopics(topics, 
> options);
> TopicDescription topicDescription = 
> describeTopicsResult.topicNameValues().get(topic).get();
> System.out.println(topicDescription.authorizedOperations());
> {code}
> With ZooKeeper this prints null, and with KRaft it prints [ALTER, READ, 
> DELETE, ALTER_CONFIGS, CREATE, DESCRIBE_CONFIGS, WRITE, DESCRIBE].
> Admin.getTopicDescriptionFromDescribeTopicsResponseTopic does not take 
> into account the options provided to describeTopics() and always populates 
> the authorizedOperations field.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-5261) Performance improvement of SimpleAclAuthorizer

2024-06-07 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-5261.
---
Resolution: Won't Do

> Performance improvement of SimpleAclAuthorizer
> --
>
> Key: KAFKA-5261
> URL: https://issues.apache.org/jira/browse/KAFKA-5261
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.2.1
>Reporter: Stephane Maarek
>Priority: Major
>
> Currently, looking at the KafkaApis class, it seems that every request going 
> through Kafka is also going through an authorize check:
> {code}
>   private def authorize(session: Session, operation: Operation, resource: 
> Resource): Boolean =
> authorizer.forall(_.authorize(session, operation, resource))
> {code}
> The SimpleAclAuthorizer logic runs through checks which all look to be done 
> in linear time (except on the first run), proportional to the number of ACLs 
> on a specific resource. This operation is re-run every time a client tries 
> to use a Kafka API, especially in the very frequently called 
> `handleProducerRequest` and `handleFetchRequest`.
> I believe a cache could be built to store the result of the authorize call, 
> possibly allowing more expensive authorize() calls to happen, and greatly 
> reducing the CPU usage in the long run. The cache would be invalidated every 
> time a change happens to aclCache.
> Thoughts before I try giving it a go with a PR? 
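
A minimal sketch of the proposed cache (all names below are hypothetical, not 
Kafka APIs):
{code:java}
// Cache of authorize() results, keyed by (session, operation, resource).
// Invalidated wholesale whenever aclCache changes.
private final ConcurrentHashMap<CacheKey, Boolean> authorizeCache = new ConcurrentHashMap<>();

private boolean cachedAuthorize(Session session, Operation operation, Resource resource) {
    return authorizeCache.computeIfAbsent(
        new CacheKey(session, operation, resource),
        key -> doAuthorize(session, operation, resource)); // the existing linear-time check
}

// Hook this to every change applied to aclCache:
private void onAclChange() {
    authorizeCache.clear();
}
{code}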



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16881) InitialState type leaks into the Connect REST API OpenAPI spec

2024-06-03 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16881:
--

 Summary: InitialState type leaks into the Connect REST API OpenAPI 
spec
 Key: KAFKA-16881
 URL: https://issues.apache.org/jira/browse/KAFKA-16881
 Project: Kafka
  Issue Type: Task
  Components: connect
Affects Versions: 3.7.0
Reporter: Mickael Maison


In our [OpenAPI spec 
file|https://kafka.apache.org/37/generated/connect_rest.yaml] we have the 
following:
{noformat}
CreateConnectorRequest:
      type: object
      properties:
        config:
          type: object
          additionalProperties:
            type: string
        initialState:
          type: string
          enum:
          - RUNNING
          - PAUSED
          - STOPPED
        initial_state:
          type: string
          enum:
          - RUNNING
          - PAUSED
          - STOPPED
          writeOnly: true
        name:
          type: string{noformat}
Only initial_state is a valid field; initialState should not be present.
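
For context, a hedged example of a valid creation request using the real field 
(the connector name and config are made up):
{noformat}
POST /connectors
{
  "name": "my-connector",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "file": "/tmp/input.txt",
    "topic": "my-topic"
  },
  "initial_state": "PAUSED"
}
{noformat}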

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16865) Admin.describeTopics behavior change after KIP-966

2024-05-30 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16865:
--

 Summary: Admin.describeTopics behavior change after KIP-966
 Key: KAFKA-16865
 URL: https://issues.apache.org/jira/browse/KAFKA-16865
 Project: Kafka
  Issue Type: Task
  Components: admin, clients
Reporter: Mickael Maison


Running the following code produces different behavior between ZooKeeper and 
KRaft:


{code:java}
DescribeTopicsOptions options = new 
DescribeTopicsOptions().includeAuthorizedOperations(false);
TopicCollection topics = 
TopicCollection.ofTopicNames(Collections.singletonList(topic));
DescribeTopicsResult describeTopicsResult = admin.describeTopics(topics, 
options);
TopicDescription topicDescription = 
describeTopicsResult.topicNameValues().get(topic).get();
System.out.println(topicDescription.authorizedOperations());
{code}

With ZooKeeper this prints null, and with KRaft it prints [ALTER, READ, DELETE, 
ALTER_CONFIGS, CREATE, DESCRIBE_CONFIGS, WRITE, DESCRIBE].

Admin.getTopicDescriptionFromDescribeTopicsResponseTopic does not take into 
account the options provided to describeTopics() and always populates the 
authorizedOperations field.
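
A sketch of the kind of guard the fix implies (the helper names are 
illustrative of the admin client internals, not the actual patch):
{code:java}
// Only populate authorizedOperations when the caller asked for them:
Set<AclOperation> authorizedOperations = options.includeAuthorizedOperations()
    ? validAclOperations(topicResponse.topicAuthorizedOperations())
    : null;
{code}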




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16859) Cleanup check if tiered storage is enabled

2024-05-28 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16859:
--

 Summary: Cleanup check if tiered storage is enabled
 Key: KAFKA-16859
 URL: https://issues.apache.org/jira/browse/KAFKA-16859
 Project: Kafka
  Issue Type: Task
Reporter: Mickael Maison


We have 2 ways to detect whether tiered storage is enabled:
- KafkaConfig.isRemoteLogStorageSystemEnabled
- KafkaConfig.remoteLogManagerConfig().enableRemoteStorageSystem()

We use both in various files. We should stick with one way to do it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16825) CVE vulnerabilities in Jetty and netty

2024-05-23 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16825.

Fix Version/s: 3.8.0
   Resolution: Fixed

> CVE vulnerabilities in Jetty and netty
> --
>
> Key: KAFKA-16825
> URL: https://issues.apache.org/jira/browse/KAFKA-16825
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 3.7.0
>Reporter: mooner
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.8.0
>
>
> There is a vulnerability (CVE-2024-29025) in the passive dependency software 
> Netty used by Kafka, which has been fixed in version 4.1.108.Final.
> There is also a vulnerability (CVE-2024-22201) in the passive dependency 
> software Jetty, which has been fixed in version 9.4.54.v20240208.
> When will Kafka upgrade the versions of Netty and Jetty to fix these two 
> vulnerabilities?
> Reference website:
> https://nvd.nist.gov/vuln/detail/CVE-2024-29025
> https://nvd.nist.gov/vuln/detail/CVE-2024-22201



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-12399) Deprecate Log4J Appender KIP-719

2024-05-22 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-12399.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Deprecate Log4J Appender KIP-719
> 
>
> Key: KAFKA-12399
> URL: https://issues.apache.org/jira/browse/KAFKA-12399
> Project: Kafka
>  Issue Type: Improvement
>  Components: logging
>Reporter: Dongjin Lee
>Assignee: Mickael Maison
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.8.0
>
>
> As a following job of KAFKA-9366, we have to entirely remove the log4j 1.2.7 
> dependency from the classpath by removing dependencies on log4j-appender.
> KIP-719: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-719%3A+Deprecate+Log4J+Appender



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-7632) Support Compression Level

2024-05-21 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-7632.
---
Fix Version/s: 3.8.0
 Assignee: Mickael Maison  (was: Dongjin Lee)
   Resolution: Fixed

> Support Compression Level
> -
>
> Key: KAFKA-7632
> URL: https://issues.apache.org/jira/browse/KAFKA-7632
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.1.0
> Environment: all
>Reporter: Dave Waters
>Assignee: Mickael Maison
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.8.0
>
>
> The compression level for ZSTD is currently set to use the default level (3), 
> which is a conservative setting that in some use cases eliminates the value 
> that ZSTD provides with improved compression. Each use case will vary, so 
> exposing the level as a producer, broker, and topic configuration setting 
> will allow the user to adjust the level.
> Since this also applies to the other compression codecs, we should add the 
> same functionality to them.
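
A sketch of what exposing the level could look like in producer configs (the 
per-codec config names follow KIP-390 as we understand it, and should be 
treated as assumptions):
{noformat}
compression.type=zstd
# Assumed per-codec level configs:
compression.zstd.level=10
compression.gzip.level=6
compression.lz4.level=9
{noformat}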



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16771) First log directory printed twice when formatting storage

2024-05-15 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16771:
--

 Summary: First log directory printed twice when formatting storage
 Key: KAFKA-16771
 URL: https://issues.apache.org/jira/browse/KAFKA-16771
 Project: Kafka
  Issue Type: Task
  Components: tools
Affects Versions: 3.7.0
Reporter: Mickael Maison


If multiple log directories are set, when running bin/kafka-storage.sh format, 
the first directory is printed twice. For example:

{noformat}
bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c 
config/kraft/server.properties --release-version 3.6
metaPropertiesEnsemble=MetaPropertiesEnsemble(metadataLogDir=Optional.empty, 
dirs={/tmp/kraft-combined-logs: EMPTY, /tmp/kraft-combined-logs2: EMPTY})
Formatting /tmp/kraft-combined-logs with metadata.version 3.6-IV2.
Formatting /tmp/kraft-combined-logs with metadata.version 3.6-IV2.
Formatting /tmp/kraft-combined-logs2 with metadata.version 3.6-IV2.
{noformat}






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16769) Delete deprecated add.source.alias.to.metrics configuration

2024-05-15 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16769:
--

 Summary: Delete deprecated add.source.alias.to.metrics 
configuration
 Key: KAFKA-16769
 URL: https://issues.apache.org/jira/browse/KAFKA-16769
 Project: Kafka
  Issue Type: Task
  Components: mirrormaker
Reporter: Mickael Maison
Assignee: Mickael Maison
 Fix For: 4.0.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16646) Consider only running the CVE scanner action on apache/kafka and not in forks

2024-04-30 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16646:
--

 Summary: Consider only running the CVE scanner action on 
apache/kafka and not in forks
 Key: KAFKA-16646
 URL: https://issues.apache.org/jira/browse/KAFKA-16646
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison


Currently the CVE scanner action is failing due to CVEs in the base image. It 
seems that anybody who has a fork is getting daily emails about it.
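
One common way to restrict a workflow to the upstream repository (a sketch; 
the job name is illustrative):
{noformat}
jobs:
  cve-scan:
    if: github.repository == 'apache/kafka'
    runs-on: ubuntu-latest
{noformat}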



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16645) CVEs in 3.7.0 docker image

2024-04-30 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16645:
--

 Summary: CVEs in 3.7.0 docker image
 Key: KAFKA-16645
 URL: https://issues.apache.org/jira/browse/KAFKA-16645
 Project: Kafka
  Issue Type: Task
Affects Versions: 3.7.0
Reporter: Mickael Maison


Our Docker Image CVE Scanner GitHub action reports 2 high CVEs in our base 
image:

apache/kafka:3.7.0 (alpine 3.19.1)
==
Total: 2 (HIGH: 2, CRITICAL: 0)

Library: libexpat (installed version 2.5.0-r2)
- CVE-2023-52425 (HIGH, fixed in 2.6.0-r0): expat: parsing large tokens can 
trigger a denial of service
  https://avd.aquasec.com/nvd/cve-2023-52425
- CVE-2024-28757 (HIGH, fixed in 2.6.2-r0): expat: XML Entity Expansion
  https://avd.aquasec.com/nvd/cve-2024-28757

Looking at the 
[KIP|https://cwiki.apache.org/confluence/display/KAFKA/KIP-975%3A+Docker+Image+for+Apache+Kafka#KIP975:DockerImageforApacheKafka-WhatifweobserveabugoracriticalCVEinthereleasedApacheKafkaDockerImage?]
 that introduced the docker images, it seems we should release a bugfix when 
high CVEs are detected. It would be good to investigate and assess whether 
Kafka is impacted or not.




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16478) Links for Kafka 3.5.2 release are broken

2024-04-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16478.

Resolution: Fixed

> Links for Kafka 3.5.2 release are broken
> 
>
> Key: KAFKA-16478
> URL: https://issues.apache.org/jira/browse/KAFKA-16478
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 3.5.2
>Reporter: Philipp Trulson
>Assignee: Mickael Maison
>Priority: Major
>
> While trying to update our setup, I noticed that the download links for the 
> 3.5.2 release are broken. They all point to a different host and also contain 
> an additional `/kafka` in their URL. Compare:
> not working:
> [https://downloads.apache.org/kafka/kafka/3.5.2/RELEASE_NOTES.html]
> working:
> [https://archive.apache.org/dist/kafka/3.5.1/RELEASE_NOTES.html]
> [https://downloads.apache.org/kafka/3.6.2/RELEASE_NOTES.html]
> This goes for all links in the release - archives, checksums, signatures.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15882) Scheduled nightly github actions workflow for CVE reports on published docker images

2024-03-25 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15882.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Scheduled nightly github actions workflow for CVE reports on published docker 
> images
> 
>
> Key: KAFKA-15882
> URL: https://issues.apache.org/jira/browse/KAFKA-15882
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Vedarth Sharma
>Assignee: Vedarth Sharma
>Priority: Major
> Fix For: 3.8.0
>
>
> This scheduled github actions workflow will check supported published docker 
> images for CVEs and generate reports.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16206) KRaftMigrationZkWriter tries to delete deleted topic configs twice

2024-03-21 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16206.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaftMigrationZkWriter tries to delete deleted topic configs twice
> --
>
> Key: KAFKA-16206
> URL: https://issues.apache.org/jira/browse/KAFKA-16206
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft, migration
>Reporter: David Arthur
>Assignee: Alyssa Huang
>Priority: Minor
> Fix For: 3.8.0
>
>
> When deleting a topic, we see spurious ERROR logs from 
> kafka.zk.migration.ZkConfigMigrationClient:
>  
> {code:java}
> Did not delete ConfigResource(type=TOPIC, name='xxx') since the node did not 
> exist. {code}
> This seems to happen because ZkTopicMigrationClient#deleteTopic is deleting 
> the topic, partitions, and config ZNodes in one shot. Subsequent calls from 
> KRaftMigrationZkWriter to delete the config encounter a NO_NODE since the 
> ZNode is already gone.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16355) ConcurrentModificationException in InMemoryTimeOrderedKeyValueBuffer.evictWhile

2024-03-08 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16355:
--

 Summary: ConcurrentModificationException in 
InMemoryTimeOrderedKeyValueBuffer.evictWhile
 Key: KAFKA-16355
 URL: https://issues.apache.org/jira/browse/KAFKA-16355
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 3.5.1
Reporter: Mickael Maison


While a Streams application was restoring its state after an outage, it hit the 
following:

org.apache.kafka.streams.errors.StreamsException: Exception caught in process. 
taskId=0_16, processor=KSTREAM-SOURCE-00, topic=, partition=16, 
offset=454875695, stacktrace=java.util.ConcurrentModificationException
at java.base/java.util.TreeMap$PrivateEntryIterator.remove(TreeMap.java:1507)
at 
org.apache.kafka.streams.state.internals.InMemoryTimeOrderedKeyValueBuffer.evictWhile(InMemoryTimeOrderedKeyValueBuffer.java:423)
at 
org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessorSupplier$KTableSuppressProcessor.enforceConstraints(KTableSuppressProcessorSupplier.java:178)
at 
org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessorSupplier$KTableSuppressProcessor.process(KTableSuppressProcessorSupplier.java:165)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:157)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:290)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:269)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:228)
at 
org.apache.kafka.streams.kstream.internals.TimestampedCacheFlushListener.apply(TimestampedCacheFlushListener.java:45)
at 
org.apache.kafka.streams.state.internals.MeteredWindowStore.lambda$setFlushListener$4(MeteredWindowStore.java:181)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.putAndMaybeForward(CachingWindowStore.java:124)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.lambda$initInternal$0(CachingWindowStore.java:99)
at 
org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:158)
at 
org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:252)
at 
org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:302)
at 
org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:179)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:173)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:47)
at 
org.apache.kafka.streams.state.internals.MeteredWindowStore.lambda$put$5(MeteredWindowStore.java:201)
at 
org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:872)
at 
org.apache.kafka.streams.state.internals.MeteredWindowStore.put(MeteredWindowStore.java:200)
at 
org.apache.kafka.streams.processor.internals.AbstractReadWriteDecorator$WindowStoreReadWriteDecorator.put(AbstractReadWriteDecorator.java:201)
at 
org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:138)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:157)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:290)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:269)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:228)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:215)
at 
org.apache.kafka.streams.kstream.internals.KStreamPeek$KStreamPeekProcessor.process(KStreamPeek.java:42)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:159)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:290)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:269)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:228)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:215)
at 
org.apache.kafka.streams.kstream.internals.KStreamFilter$KStreamFilterProcessor.process(KStreamFilter.java:44)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:159)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:290)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:269)
at 
org.apache.kafka.streams.

[jira] [Created] (KAFKA-16347) Bump ZooKeeper to 3.8.4

2024-03-06 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16347:
--

 Summary: Bump ZooKeeper to 3.8.4
 Key: KAFKA-16347
 URL: https://issues.apache.org/jira/browse/KAFKA-16347
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.6.1, 3.7.0
Reporter: Mickael Maison
Assignee: Mickael Maison


ZooKeeper 3.8.4 was released and contains a few CVE fixes: 
https://zookeeper.apache.org/doc/r3.8.4/releasenotes.html

We should update 3.6, 3.7 and trunk to use this new ZooKeeper release.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16318) Add javadoc to KafkaMetric

2024-03-01 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16318:
--

 Summary: Add javadoc to KafkaMetric
 Key: KAFKA-16318
 URL: https://issues.apache.org/jira/browse/KAFKA-16318
 Project: Kafka
  Issue Type: Bug
  Components: docs
Reporter: Mickael Maison


KafkaMetric is part of the public API but it's missing javadoc describing the 
class and several of its methods.
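
A sketch of the kind of javadoc the ticket asks for (the wording is ours, not 
the actual javadoc that was added):
{code:java}
/**
 * An implementation of {@link Metric} that binds a metric name to a measurable
 * value together with the configuration used to compute and report it.
 */
public final class KafkaMetric implements Metric {
    // ...
}
{code}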



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16292) Revamp upgrade.html page

2024-02-21 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16292:
--

 Summary: Revamp upgrade.html page 
 Key: KAFKA-16292
 URL: https://issues.apache.org/jira/browse/KAFKA-16292
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Mickael Maison


At the moment we keep adding to this page for each release. The upgrade.html 
file is now over 2000 lines long. It still contains steps for upgrading from 
0.8 to 0.9! These steps are already in the docs for each version, which can be 
accessed via //documentation.html.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-13566) producer exponential backoff implementation

2024-02-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-13566.

Resolution: Duplicate

> producer exponential backoff implementation
> ---
>
> Key: KAFKA-13566
> URL: https://issues.apache.org/jira/browse/KAFKA-13566
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Luke Chen
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-13567) adminClient exponential backoff implementation

2024-02-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-13567.

Resolution: Duplicate

> adminClient exponential backoff implementation
> --
>
> Key: KAFKA-13567
> URL: https://issues.apache.org/jira/browse/KAFKA-13567
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Luke Chen
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-13565) consumer exponential backoff implementation

2024-02-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-13565.

Fix Version/s: 3.7.0
   Resolution: Duplicate

> consumer exponential backoff implementation
> ---
>
> Key: KAFKA-13565
> URL: https://issues.apache.org/jira/browse/KAFKA-13565
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Luke Chen
>Priority: Major
> Fix For: 3.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14576) Move ConsoleConsumer to tools

2024-02-13 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14576.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Move ConsoleConsumer to tools
> -
>
> Key: KAFKA-14576
> URL: https://issues.apache.org/jira/browse/KAFKA-14576
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.8.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14822) Allow restricting File and Directory ConfigProviders to specific paths

2024-02-13 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14822.

Fix Version/s: 3.8.0
 Assignee: Gantigmaa Selenge  (was: Mickael Maison)
   Resolution: Fixed

> Allow restricting File and Directory ConfigProviders to specific paths
> --
>
> Key: KAFKA-14822
> URL: https://issues.apache.org/jira/browse/KAFKA-14822
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Mickael Maison
>Assignee: Gantigmaa Selenge
>Priority: Major
>  Labels: need-kip
> Fix For: 3.8.0
>
>
> In sensitive environments, it would be interesting to be able to restrict the 
> files that can be accessed by the built-in configuration providers.
> For example:
> config.providers=directory
> config.providers.directory.class=org.apache.kafka.connect.configs.DirectoryConfigProvider
> config.providers.directory.path=/var/run
> Then if a caller tries to access another path, for example
> ssl.keystore.password=${directory:/etc/passwd:keystore-password}
> it would be rejected.
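
A hedged sketch of what the restriction could look like (the allowed.paths 
parameter name follows the KIP that resolved this ticket and is an assumption; 
the provider class is quoted as written above):
{noformat}
config.providers=directory
config.providers.directory.class=org.apache.kafka.connect.configs.DirectoryConfigProvider
# Assumed restriction parameter: only paths under /var/run may be read
config.providers.directory.param.allowed.paths=/var/run
{noformat}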



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16246) Cleanups in ConsoleConsumer

2024-02-13 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16246:
--

 Summary: Cleanups in ConsoleConsumer
 Key: KAFKA-16246
 URL: https://issues.apache.org/jira/browse/KAFKA-16246
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Mickael Maison


When rewriting ConsoleConsumer in Java, in order to keep the conversion and 
review process simple we mimicked the logic flow and types used in the Scala 
implementation.

Once the rewrite is merged, we should refactor some of the logic to make it 
more Java-like. This includes removing Optional where it makes sense and moving 
all the argument checking logic into ConsoleConsumerOptions.


See https://github.com/apache/kafka/pull/15274 for pointers.

  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

