[jira] [Created] (KAFKA-15055) Request access to shared - ppc64le node

2023-06-04 Thread Vaibhav (Jira)
Vaibhav created KAFKA-15055:
---
Summary: Request access to shared - ppc64le node
Key: KAFKA-15055
URL: https://issues.apache.org/jira/browse/KAFKA-15055
Project: Kafka
Issue Type: Task

Errors related to Kafka API checks

2023-06-04 Thread odashima.tatsuya
Hi there, I have a problem when I routinely run the following command line. I am also experiencing process hangs on one broker at irregular intervals, but I do not know whether they are related to this error. Has anyone had the same problem and can offer help? ○ Kafka Broker Server Information
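The command itself is truncated in the archive. For reference, one common way to exercise the broker API from Java is the AdminClient; the sketch below (bootstrap address and timeout are placeholder values, not from the thread) describes the cluster and fails fast instead of hanging:

import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;

public class BrokerCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder address; point this at the broker under test.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Fail fast instead of hanging if the broker is unresponsive.
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "10000");
        try (Admin admin = Admin.create(props)) {
            DescribeClusterResult cluster = admin.describeCluster();
            System.out.println("Cluster id: " + cluster.clusterId().get());
            System.out.println("Nodes: " + cluster.nodes().get());
        }
    }
}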

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1895

2023-06-04 Thread Apache Jenkins Server
See Changes: -- [...truncated 567430 lines...]
[2023-06-04T20:46:22.160Z]
[2023-06-04T20:46:22.160Z] > Task :streams:javadoc
[2023-06-04T20:46:22.160Z] > Task

Re: [DISCUSS] KIP-852 Optimize calculation of size for log in remote tier

2023-06-04 Thread Kamal Chandraprakash
Hi Divij, Thanks for the KIP! Sorry for the late reply. Can you explain rejected alternative 3, "Store the cumulative size of the remote tier log in-memory at the RemoteLogManager"? "*Cons*: Every time a broker starts up, it will scan through all the segments in the remote tier to initialise the
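A sketch of what that rejected alternative implies (the store interface and class names below are invented for illustration; this is not Kafka's actual API): the in-memory total is cheap to maintain once seeded, but seeding it requires exactly the start-up scan Kamal quotes as the con.

import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical stand-in for the remote segment metadata store; not Kafka's API.
interface RemoteSegmentStore {
    List<Long> segmentSizes(); // sizes in bytes of all remote segments
}

class RemoteLogSizeTracker {
    private final AtomicLong totalBytes = new AtomicLong();

    // The "con" from the KIP discussion: on every broker start-up, the
    // in-memory total must be rebuilt by scanning all remote segments.
    void initialize(RemoteSegmentStore store) {
        long sum = store.segmentSizes().stream().mapToLong(Long::longValue).sum();
        totalBytes.set(sum);
    }

    // After initialization, updates are cheap and incremental.
    void onSegmentCopied(long sizeBytes)  { totalBytes.addAndGet(sizeBytes); }
    void onSegmentDeleted(long sizeBytes) { totalBytes.addAndGet(-sizeBytes); }

    long size() { return totalBytes.get(); }
}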

Re: [DISCUSS] KIP-938: Add more metrics for measuring KRaft performance

2023-06-04 Thread Divij Vaidya
Thanks for the KIP, Colin. I liked the rationale section, which clearly explains why the metrics are required and what they might indicate. 1. I have a question about the "CurrentMetadataVersion" metric. Correct me if I'm wrong, but I assume that we are referring to
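For readers unfamiliar with gauge-style metrics like CurrentMetadataVersion: a gauge samples a live value at read time rather than counting events. A self-contained illustration in plain Java (the Gauge class and field names here are invented; Kafka registers its metrics through its own metrics layer, not this code):

import java.util.concurrent.atomic.AtomicLong;
import java.util.function.LongSupplier;

// Minimal gauge abstraction for illustration only.
class Gauge {
    private final String name;
    private final LongSupplier value;
    Gauge(String name, LongSupplier value) { this.name = name; this.value = value; }
    long read() { return value.getAsLong(); }
    String name() { return name; }
}

class MetadataState {
    // Updated whenever the node finishes applying a metadata version change.
    final AtomicLong currentMetadataVersion = new AtomicLong(0);
}

class Demo {
    public static void main(String[] args) {
        MetadataState state = new MetadataState();
        // The gauge samples the live value at read time; it records nothing itself.
        Gauge gauge = new Gauge("CurrentMetadataVersion", state.currentMetadataVersion::get);
        state.currentMetadataVersion.set(7); // hypothetical feature level
        System.out.println(gauge.name() + " = " + gauge.read());
    }
}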

Re: [VOTE] KIP-872: Add Serializer#serializeToByteBuffer() to reduce memory copying

2023-06-04 Thread Kamal Chandraprakash
+1 (non-binding). Thanks for the improvement!

Thanks,
Kamal

On Wed, May 31, 2023 at 8:23 AM ziming deng wrote:
> Hello ShunKang,
>
> +1 (binding) from me
>
> --
> Thanks,
> Ziming
>
> > On May 30, 2023, at 20:07, ShunKang Lin wrote:
> >
> > Hi all,
> >
> > Bump this thread again and see if
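For context on what is being voted on: KIP-872 lets a serializer hand back a ByteBuffer so the producer can skip an extra array copy. A rough sketch of the idea (the interface below is a simplified stand-in; the exact signature added to Kafka's Serializer is defined by the KIP itself):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Simplified stand-in for Kafka's Serializer; method name follows the KIP
// title, but the real signature may differ from this illustration.
interface Serializer<T> {
    byte[] serialize(String topic, T data);

    // Default bridges to the byte[] path; implementations that already build
    // a ByteBuffer can override this and skip one copy.
    default ByteBuffer serializeToByteBuffer(String topic, T data) {
        byte[] bytes = serialize(topic, data);
        return bytes == null ? null : ByteBuffer.wrap(bytes);
    }
}

class StringSerializer implements Serializer<String> {
    @Override
    public byte[] serialize(String topic, String data) {
        return data == null ? null : data.getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public ByteBuffer serializeToByteBuffer(String topic, String data) {
        // Encode directly into a ByteBuffer; no intermediate byte[] copy.
        return data == null ? null : StandardCharsets.UTF_8.encode(data);
    }
}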