[jira] [Resolved] (KAFKA-15689) KRaftMigrationDriver not logging the skipped event when expected state is wrong

2023-10-30 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen resolved KAFKA-15689.
---
Fix Version/s: 3.7.0
   Resolution: Fixed

> KRaftMigrationDriver not logging the skipped event when expected state is 
> wrong
> ---
>
> Key: KAFKA-15689
> URL: https://issues.apache.org/jira/browse/KAFKA-15689
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.6.0
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
>Priority: Minor
> Fix For: 3.7.0
>
>
> The KRaftMigrationDriver.checkDriverState method is used in multiple 
> implementations of the 
> MigrationEvent base class, but when it comes to logging that an event was 
> skipped because the expected state is wrong, it always logs 
> "KRaftMigrationDriver" instead of the skipped event.
> For example, a logging line could be like this:
> {code:java}
> 2023-10-25 12:17:25,460 INFO [KRaftMigrationDriver id=5] Expected driver 
> state ZK_MIGRATION but found SYNC_KRAFT_TO_ZK. Not running this event 
> KRaftMigrationDriver. 
> (org.apache.kafka.metadata.migration.KRaftMigrationDriver) 
> [controller-5-migration-driver-event-handler] {code}
> This is because its code has something like this:
> {code:java}
> log.info("Expected driver state {} but found {}. Not running this event {}.",
> expectedState, migrationState, this.getClass().getSimpleName()); {code}
> Of course, "this" refers to the KRaftMigrationDriver class.
> It should print the specific skipped event instead.
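A minimal sketch of the kind of fix described (illustrative names only, not the actual Kafka patch): have the state check receive the event being evaluated and log that event's class name, rather than `this.getClass().getSimpleName()`, which resolves to the driver class.

```java
// Hypothetical sketch, not the actual Kafka patch: pass the skipped event
// into the state check so its own class name appears in the log line.
public class CheckDriverStateSketch {
    public enum MigrationDriverState { ZK_MIGRATION, SYNC_KRAFT_TO_ZK }

    // Stand-ins for the MigrationEvent base class and one concrete event.
    public interface MigrationEvent { }
    public static class PollEvent implements MigrationEvent { }

    public static String skipMessage(MigrationDriverState expectedState,
                                     MigrationDriverState migrationState,
                                     MigrationEvent event) {
        // event.getClass().getSimpleName() names the skipped event;
        // this.getClass().getSimpleName() would name the driver instead.
        return String.format(
                "Expected driver state %s but found %s. Not running this event %s.",
                expectedState, migrationState, event.getClass().getSimpleName());
    }

    public static void main(String[] args) {
        System.out.println(skipMessage(MigrationDriverState.ZK_MIGRATION,
                                       MigrationDriverState.SYNC_KRAFT_TO_ZK,
                                       new PollEvent()));
    }
}
```

With this shape, the example log line would end with the concrete event name (here `PollEvent`) instead of the driver class.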



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-977: Partition-Level Throughput Metrics

2023-10-30 Thread Divij Vaidya
Hey *Qichao*

Thank you for the update on the KIP. I like the idea of incremental
delivery and deferring which metrics support this verbosity to a later KIP.
But I also want to ensure that we won't have to change the current
config when adding that in the future. Hence, we need some discussion on it
within the scope of this KIP.

About the dynamic configuration:
Do we need to add the "default" mode? I am asking because it may inhibit us
from adding the allowList option in the future. If instead we rephrase
the config as "metric.verbosity.high", taking a regex as its value
(default empty), then we wouldn't have to worry about the
future-proofing of this KIP. Notably, this is an existing pattern used by
KIP-544.
Alternatively, if you choose to stick with the current configuration pattern,
please provide information on what this config will look like when we add
allow-listing in the future.
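As a sketch of the suggested shape (all identifiers hypothetical, since the config name and semantics are still under discussion): a regex-valued config with an empty default adds no extra dimensions, and a future allowlist is just a different regex value rather than a new config.

```java
import java.util.regex.Pattern;

// Hypothetical sketch of a regex-valued "metric.verbosity.high" config:
// an empty value (the default) enables the partition dimension for nothing,
// and a future allowlist is just a more specific regex.
public class VerbosityFilter {
    private final Pattern highVerbosity;

    public VerbosityFilter(String configValue) {
        this.highVerbosity = configValue.isEmpty() ? null : Pattern.compile(configValue);
    }

    public boolean includePartitionLabel(String metricName) {
        return highVerbosity != null && highVerbosity.matcher(metricName).matches();
    }

    public static void main(String[] args) {
        VerbosityFilter high = new VerbosityFilter("kafka\\.server:.*BytesInPerSec.*");
        // Matching metrics get the extra "partition" dimension...
        System.out.println(high.includePartitionLabel(
                "kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec"));
        // ...while the empty default keeps everything at low verbosity.
        System.out.println(new VerbosityFilter("").includePartitionLabel(
                "kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec"));
    }
}
```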

About the perf test:
Motivation - The motivation of the perf test is to give users a hint of
the perf penalty they can expect and whether the default has degraded perf
(due to additional "empty" labels).
Dimensions of the test could be - scrape interval, utilization of the broker
(no traffic vs. heavy traffic), number of partitions (small/200 to
large/2k).
Things to collect during the perf test - number of MBeans registered with JMX,
CPU, heap utilization.
Expected results - As long as we can prove that there is no significant
additional usage of CPU or heap after this change for the "default"
mode, we should be good. For the "high" mode, we should document the
expected increase for users, but it is not a blocker for implementing this KIP.


*Kirk*, I have tried to clarify the expectation on performance; does that
address your earlier question? Also, I am happy with having a Kafka-level
dynamic config that we can use to filter metrics/dimensionality, since we
have a precedent in KIP-544. Hence, my suggestion to push this filtering
down to the metrics library can be ignored.

--
Divij Vaidya



On Sat, Oct 28, 2023 at 11:37 AM Qichao Chu  wrote:

> Hello Everyone,
>
> Can I ask for some feedback regarding KIP-977
> <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-977%3A+Partition-Level+Throughput+Metrics
> >
> ?
>
> Best,
> Qichao Chu
> Software Engineer | Data - Kafka
>
>
> On Mon, Oct 16, 2023 at 7:34 PM Qichao Chu  wrote:
>
> > Hi Divij and Kirk,
> >
> > Thank you both for providing the valuable feedback and sorry for the
> > delay. I have just updated the KIP to address the comments.
> >
> >1. Instead of using a topic-level control, global verbosity control
> >makes more sense if we want to extend it in the future. It would be
> very
> >difficult if we want to apply the topic allowlist everywhere
> >2. Also, the topic allowlist was not dynamic which makes everything
> >quite complex, especially for the topic lifecycle management. By
> using the
> >dynamic global config, debugging could be easier, and management of
> the
> >config is also made easier.
> >3. More details are included in the test section.
> >
> > One thing that is still missing is the performance numbers. I will get
> > them ready with our internal clusters and share them soon.
> >
> > Many thanks for the review!
> > Qichao
> >
> > On Tue, Sep 12, 2023 at 8:31 AM Kirk True  wrote:
> >
> >> Oh, and does metrics.partition.level.reporting.topics allow for regex?
> >>
> >> > On Sep 12, 2023, at 8:26 AM, Kirk True  wrote:
> >> >
> >> > Hi Qichao,
> >> >
> >> > Thanks for the KIP!
> >> >
> >> > Divij—questions/comments inline...
> >> >
> >> >> On Sep 11, 2023, at 4:32 AM, Divij Vaidya 
> >> wrote:
> >> >>
> >> >> Thank you for the proposal Qichao.
> >> >>
> >> >> I agree with the motivation here and understand the tradeoff here
> >> >> between observability vs. increased metric dimensions (metric fan-out
> >> >> as you say in the KIP).
> >> >>
> >> >> High level comments:
> >> >>
> >> >> 1. I would urge you to consider the extensibility of the proposal for
> >> >> other types of metrics. Tomorrow, if we want to selectively add
> >> >> "partition" dimension to another metric, would we have to modify the
> >> >> code where each metric is emitted? Alternatively, could we abstract
> >> >> out this config in a "Kafka Metrics" library. The code provides all
> >> >> information about this library and this library can choose which
> >> >> dimensions it wants to add to the final metrics that are emitted
> based
> >> >> on declarative configuration.
> >> >
> >> > I’d agree with this if it doesn’t place a burden on the callers. Are
> >> there any potential call sites that don’t have the partition information
> >> readily available?
> >> >
> >> >> 2. Can we offload the handling of this dimension filtering to the
> >> >> metric framework? Have you explored whether prometheus or other
> >> >> libraries provide the ability to dynamically change dimensions
> >> >> associated with metrics?
> >> >
> >> > I’m not familiar with the downs

[jira] [Created] (KAFKA-15754) The kafka-storage tool can generate UUID starting with "-"

2023-10-30 Thread Paolo Patierno (Jira)
Paolo Patierno created KAFKA-15754:
--

 Summary: The kafka-storage tool can generate UUID starting with "-"
 Key: KAFKA-15754
 URL: https://issues.apache.org/jira/browse/KAFKA-15754
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.6.0
Reporter: Paolo Patierno


Using the kafka-storage.sh tool, it seems that it can still generate a UUID 
starting with a dash "-", which then breaks how the argparse4j library works. 
With such a UUID (i.e. -rmdB0m4T4–Y4thlNXk4Q in my case) the tool exits with 
the following error:
kafka-storage: error: argument --cluster-id/-t: expected one argument
That said, it seems that this problem was already addressed in the 
Uuid.randomUuid method, which keeps generating a new UUID until it gets one 
that doesn't start with "-". This is the commit addressing it: 
[https://github.com/apache/kafka/commit/5c1dd493d6f608b566fdad5ab3a896cb13622bce]

The problem is that when toString is called on the Uuid instance, it 
Base64-encodes the generated UUID this way:
{code:java}
Base64.getUrlEncoder().withoutPadding().encodeToString(getBytesFromUuid()); 
{code}
Not sure why, but the code is using a URL-safe encoder which, taking a look 
at the Base64 class in Java, uses an RFC4648_URLSAFE encoder with the 
following alphabet:
 
{code:java}
private static final char[] toBase64URL = new char[]{'A', 'B', 'C', 'D', 'E', 
'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 
'V', 'W', 'X', 'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 
'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '0', 
'1', '2', '3', '4', '5', '6', '7', '8', '9', '-', '_'}; {code}
which, as you can see, includes the "-" character.
So despite the current Uuid.randomUuid avoiding the generation of a UUID 
starting with "-", the Base64-encoded result can still eventually contain a 
"-".
 
I was wondering if there is any good reason for using a Base64 URL encoder 
rather than just the RFC 4648 basic encoder, which uses the common Base64 
alphabet that does not contain the "-".
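The difference between the two JDK encoders is easy to demonstrate: alphabet index 62 maps to "-" in the URL-safe table and to "+" in the basic RFC 4648 table, so the same bytes encode with or without a dash depending on the encoder chosen.

```java
import java.util.Base64;

// Demonstrates why the URL-safe encoder can emit "-": alphabet entry 62
// differs between the URL-safe table ('-') and the basic table ('+').
public class Base64DashDemo {
    public static void main(String[] args) {
        // Three bytes whose 6-bit groups are all 62 (binary 111110 repeated),
        // i.e. exactly the alphabet entry that differs between the encoders.
        byte[] bytes = {(byte) 0xFB, (byte) 0xEF, (byte) 0xBE};
        System.out.println(Base64.getUrlEncoder().withoutPadding()
                .encodeToString(bytes)); // ----
        System.out.println(Base64.getEncoder().withoutPadding()
                .encodeToString(bytes)); // ++++
    }
}
```

Note the basic encoder trades "-" for "+" and "/", so it would avoid the argparse4j issue but introduce characters that are not URL- or filename-safe.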



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [ANNOUNCE] New Kafka PMC Member: Satish Duggana

2023-10-30 Thread Bruno Cadonna

Congrats, Satish!

Bruno

On 10/29/23 2:42 PM, John Roesler wrote:

Congratulations, Satish!
-John

On Sun, Oct 29, 2023, at 08:09, Randall Hauch wrote:

Congratulations, Satish!

On Sun, Oct 29, 2023 at 1:47 AM Tom Bentley  wrote:


Congratulations!

On Sun, 29 Oct 2023 at 5:41 PM, Guozhang Wang 
wrote:


Congratulations Satish!

On Sat, Oct 28, 2023 at 12:59 AM Luke Chen  wrote:


Congrats Satish!

Luke

On Sat, Oct 28, 2023 at 11:16 AM ziming deng 


wrote:


Congratulations Satish!


On Oct 27, 2023, at 23:03, Jun Rao 

wrote:


Hi, Everyone,

Satish Duggana has been a Kafka committer since 2022. He has been

very

instrumental to the community since becoming a committer. It's my

pleasure

to announce that Satish is now a member of Kafka PMC.

Congratulations Satish!

Jun
on behalf of Apache Kafka PMC










Re: [ANNOUNCE] New Kafka PMC Member: Satish Duggana

2023-10-30 Thread Josep Prat
Congrats Satish!

Best,

———
Josep Prat

Aiven Deutschland GmbH

Alexanderufer 3-7, 10117 Berlin

Amtsgericht Charlottenburg, HRB 209739 B

Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen

m: +491715557497

w: aiven.io

e: josep.p...@aiven.io

On Mon, Oct 30, 2023, 11:37 Bruno Cadonna  wrote:

> Congrats, Satish!
>
> Bruno
>
> On 10/29/23 2:42 PM, John Roesler wrote:
> > Congratulations, Satish!
> > -John
> >
> > On Sun, Oct 29, 2023, at 08:09, Randall Hauch wrote:
> >> Congratulations, Satish!
> >>
> >> On Sun, Oct 29, 2023 at 1:47 AM Tom Bentley 
> wrote:
> >>
> >>> Congratulations!
> >>>
> >>> On Sun, 29 Oct 2023 at 5:41 PM, Guozhang Wang <
> guozhang.wang...@gmail.com>
> >>> wrote:
> >>>
>  Congratulations Satish!
> 
>  On Sat, Oct 28, 2023 at 12:59 AM Luke Chen  wrote:
> >
> > Congrats Satish!
> >
> > Luke
> >
> > On Sat, Oct 28, 2023 at 11:16 AM ziming deng <
> dengziming1...@gmail.com
> 
> > wrote:
> >
> >> Congratulations Satish!
> >>
> >>> On Oct 27, 2023, at 23:03, Jun Rao 
> >>> wrote:
> >>>
> >>> Hi, Everyone,
> >>>
> >>> Satish Duggana has been a Kafka committer since 2022. He has been
>  very
> >>> instrumental to the community since becoming a committer. It's my
> >> pleasure
> >>> to announce that Satish is now a member of Kafka PMC.
> >>>
> >>> Congratulations Satish!
> >>>
> >>> Jun
> >>> on behalf of Apache Kafka PMC
> >>
> >>
> 
> 
> >>>
>


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2339

2023-10-30 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 215104 lines...]
Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testEmptyWrite() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testEmptyWrite() PASSED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testReadMigrateAndWriteProducerId() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testReadMigrateAndWriteProducerId() PASSED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testExistingKRaftControllerClaim() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testExistingKRaftControllerClaim() PASSED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testMigrateTopicConfigs() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testMigrateTopicConfigs() PASSED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testNonIncreasingKRaftEpoch() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testNonIncreasingKRaftEpoch() PASSED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testMigrateEmptyZk() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testMigrateEmptyZk() PASSED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testTopicAndBrokerConfigsMigrationWithSnapshots() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testTopicAndBrokerConfigsMigrationWithSnapshots() PASSED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testClaimAndReleaseExistingController() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testClaimAndReleaseExistingController() PASSED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testClaimAbsentController() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testClaimAbsentController() PASSED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testIdempotentCreateTopics() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testIdempotentCreateTopics() PASSED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testCreateNewTopic() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testCreateNewTopic() PASSED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testUpdateExistingTopicWithNewAndChangedPartitions() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZkMigrationClientTest > 
testUpdateExistingTopicWithNewAndChangedPartitions() PASSED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZooKeeperClientTest > 
testZNodeChangeHandlerForDataChange() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZooKeeperClientTest > 
testZNodeChangeHandlerForDataChange() PASSED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZooKeeperClientTest > 
testZooKeeperSessionStateMetric() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZooKeeperClientTest > 
testZooKeeperSessionStateMetric() PASSED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZooKeeperClientTest > 
testExceptionInBeforeInitializingSession() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZooKeeperClientTest > 
testExceptionInBeforeInitializingSession() PASSED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZooKeeperClientTest > 
testGetChildrenExistingZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZooKeeperClientTest > 
testGetChildrenExistingZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZooKeeperClientTest > 
testConnection() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZooKeeperClientTest > 
testConnection() PASSED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZooKeeperClientTest > 
testZNodeChangeHandlerForCreation() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZooKeeperClientTest > 
testZNodeChangeHandlerForCreation() PASSED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZooKeeperClientTest > 
testGetAclExistingZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZooKeeperClientTest > 
testGetAclExistingZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZooKeeperClientTest > 
testSessionExpiryDuringClose() STARTED

Gradle Test Run :core:test > Gradle Test Executor 90 > ZooKeeperClientTest > 
testSessionExpiryDuringClose() PASSED

Gradle Test Run :core:test > Grad

Re: [VOTE] KIP-975: Docker Image for Apache Kafka

2023-10-30 Thread Krishna Agarwal
Hi all,

Thanks for participating in the discussion and voting! KIP-975 has been
accepted with the following +1 votes:

- Stanislav Kozlovski (binding)
- Ismael Juma (binding)
- Manikumar (binding)
- David Jacot (binding)

The target release for this KIP is 3.7.0

Regards,
Krishna

On Fri, Oct 27, 2023 at 10:05 AM Krishna Agarwal <
krishna0608agar...@gmail.com> wrote:

> Hi,
> I'd like to call a vote on KIP-975 which aims to publish an official
> docker image for Apache Kafka.
>
> KIP -
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-975%3A+Docker+Image+for+Apache+Kafka
>
> Discussion thread -
> https://lists.apache.org/thread/3g43hps2dmkyxgglplrlwpsf7vkywkyy
>
> Regards,
> Krishna
>


Re: [DISCUSS] KIP-963: Upload and delete lag metrics in Tiered Storage

2023-10-30 Thread Christo Lolov
Heya Jorge,

Thank you for the insightful comments!

1. I see a value in such latency metrics but in my opinion the correct
location for such metrics is in the plugins providing the underlying
functionality. What are your thoughts on the matter?

2. Okay, I will look for and adjust the formatting today/tomorrow!

3.1 Done.
3.2 Sure, I will add this to the KIP later today, the suggestion makes
sense to me. However, my question is, would you still find value in
emitting metrics for all three i.e. RemoteCopyLagRecords,
RemoteCopyLagBytes and RemoteCopyLagSegments or would you only keep
RemoteCopyLagBytes and RemoteCopyLagSegments?
3.3. Yes, RemoteDeleteLagRecords was supposed to be an equivalent of
RemoteCopyLagRecords. Once I have your opinion on 3.2 I will make the
respective changes.
3.4. I envision these metrics being added to Kafka rather than the plugins.
Today Kafka sends deletes to remote storage but does not know whether those
segments were deleted immediately when the request was sent or were handed
to a background process to carry out the actual reclamation of space. The
purpose of this metric is to give a point-in-time estimate that says "hey,
we have asked for this many segments or bytes to be deleted".

4. I believe this goes down the same line of thinking as what you mentioned
in 3.3 - have I misunderstood something?

5. I have on a number of occasions found I do not have a metric to quickly
point me to which part of the tiered storage functionality is experiencing an
issue, for example a follower failing to build an auxiliary state. An
increase in the number of BuildRemoteLogAuxState requests per second can
uncover problems for specific topics warranting further investigation,
which I tend to find difficult to judge purely by parsing log
statements. An increase in the number of errors can quickly zero in on
followers failing as part of tiered storage and point me to look in the
logs specifically for that component.

6. I find it useful to start my investigations with respect to tiering
problems by checking the rough size distribution of topics in remote. From
then on I try to correlate whether a historically high-volume topic started
experiencing a decrease in volume due to a decrease in produce traffic to
that topic or due to an increase in lag on local storage due to the broker
slowing down for whatever reason. Besides correlation I would use such a
metric to also confirm whether my rate calculations are correct i.e. if
topic A receives X MB/s and rolls a segment every Y seconds with an upload
rate of Z MB/s do I see that much data actually being written in remote
storage. Do these two scenarios demonstrate the usefulness I would have
from such a metric and do the benefits make sense to you?

7. I agree. I have changed TotalRemoteLogSizeComputationTime,
TotalRemoteLogSizeBytes, and TotalRemoteLogMetadataCount to
RemoteLogSizeComputationTime, RemoteLogSizeBytes and RemoteLogMetadataCount
respectively.

On Fri, 27 Oct 2023 at 15:24, Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:

> Hi Christo,
>
> Thanks for proposing KIP, this metrics will certainly be useful to operate
> Kafka Tiered Storage as it becomes production-ready.
>
> 1. Given that the scope of the KIPs has grown to cover more metrics, what
> do you think about introducing latency metrics for RSM operations?
> Copy and delete time metrics are quite obvious/simple on what they
> represent; but fetch latency metrics would be helpful as remote fetching
> clients directly. e.g. having a "time to first byte" metric could help to
> know how much time is introduced by the remote tier to start serving
> results to the consumer, or measuring how long it takes to return a
> response to consumers.
>
> 2. Couple of requests/nits on the metrics table, could you:
> - highlight the names (or have them on a separate column, as you prefer) to
> make it easier to read? If you choose to have another column, maybe sort
> them as "Name, Description, MBean" and adjust the width.
> - group the related metrics in separate groups, e.g. Lag, Remote Delete,
> Remote Log Aux State, Remote Log Size; so we can elaborate on why these set
> of metrics are needed. Maybe adding some examples on usage and how
> actionable they are as the ones shared in previous emails would be useful
> to keep as part of the KIP.
>
> 3. On Lag metrics:
> 3.1 I would suggest the following renames:
> - TotalRemoteRecordsLag -> RemoteCopyLagRecords
> - TotalRemoteBytesLag -> RemoteCopyLagBytes
> - DeleteRemoteLag -> RemoteDeleteLagRecords
> 3.2. I agree with Kamal that having a lag based on the number of segments
> would be useful to include. Segments could give a faster proxy to
> understand whether the lag is meaningful or not. e.g. if the number of
> records and bytes are high, but the segment lag is only small (e.g. 1), it
> may be ok; but if the number of segments is high, then it can be more
> relevant to operators.
> 3.3. Could we consider having the

Re: [DISCUSS] KIP-974 Docker Image for GraalVM based Native Kafka Broker

2023-10-30 Thread Krishna Agarwal
Thanks for the feedback.

I have updated the KIP with "kafka-native" as the accepted docker image
name.

Regards,
Krishna

On Sun, Oct 29, 2023 at 10:42 PM Ismael Juma  wrote:

> I think kafka-native is clearer. Over time, the graalvm images may be used
> for production too.
>
> Ismael
>
> On Sat, Oct 28, 2023, 11:52 PM Manikumar 
> wrote:
>
> > Thanks for the explanation. I am fine with using "kafka-local" as the
> > image name.
> >
> > On Fri, Oct 27, 2023 at 11:47 AM Krishna Agarwal <
> > krishna0608agar...@gmail.com> wrote:
> >
> > > Hi Manikumar,
> > > Thanks for the feedback.
> > >
> > > This image signifies 2 things:
> > >
> > >1. Image should be used for the local development and testing
> purposes
> > >with fast startup times. (kafka-local)
> > >2. To achieve (1) - we are providing a native executable for Apache
> > >Kafka in the docker image. (kafka-native)
> > >
> > > While "graalvm" is the underlying tool enabling this, I'm unsure if we
> > > should explicitly mention it in the name.
> > > I'd love to hear your thoughts on this. Do you prefer "kafka-native"
> > > instead of "kafka-local"?
> > >
> > > Regards,
> > > Krishna
> > >
> > > On Fri, Oct 20, 2023 at 3:32 PM Manikumar 
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > > For the native AK docker image, we are considering '*kafka-local*'
> as
> > > it
> > > > clearly signifies that this image is intended exclusively for local
> > > >
> > > > I am not sure, if there is any naming pattern for graalvm based
> images.
> > > Can
> > > > we include "graalvm" to the image name like "kafka-graalvm-native".
> > > > This will clearly indicate this is graalvm based image.
> > > >
> > > >
> > > > Thanks. Regards
> > > >
> > > >
> > > >
> > > >
> > > > On Wed, Oct 18, 2023 at 9:26 PM Krishna Agarwal <
> > > > krishna0608agar...@gmail.com> wrote:
> > > >
> > > > > Hi Federico,
> > > > > Thanks for the feedback and apologies for the delay.
> > > > >
> > > > > I've included a section in the KIP on the release process. I would
> > > > greatly
> > > > > appreciate your insights after reviewing it.
> > > > >
> > > > > Regards,
> > > > > Krishna
> > > > >
> > > > > On Fri, Sep 8, 2023 at 3:08 PM Federico Valeri <
> fedeval...@gmail.com
> > >
> > > > > wrote:
> > > > >
> > > > > > Hi Krishna, thanks for opening this discussion.
> > > > > >
> > > > > > I see you created two separate KIPs (974 and 975), but there are
> > some
> > > > > > common points (build system and test plan).
> > > > > >
> > > > > > Currently, the Docker image used for system tests is only
> supported
> > > in
> > > > > > that limited scope, so the maintenance burden is minimal.
> Providing
> > > > > > official Kafka images would be much more complicated. Have you
> > > > > > considered how the image rebuild process would work in case a
> high
> > > > > > severity CVE comes out for a non Kafka image dependency? In that
> > > case,
> > > > > > there will be no Kafka release.
> > > > > >
> > > > > > Br
> > > > > > Fede
> > > > > >
> > > > > > On Fri, Sep 8, 2023 at 9:17 AM Krishna Agarwal
> > > > > >  wrote:
> > > > > > >
> > > > > > > Hi,
> > > > > > > I want to submit a KIP to deliver an experimental Apache Kafka
> > > docker
> > > > > > image.
> > > > > > > The proposed docker image can launch brokers with sub-second
> > > startup
> > > > > time
> > > > > > > and minimal memory footprint by leveraging a GraalVM based
> native
> > > > Kafka
> > > > > > > binary.
> > > > > > >
> > > > > > > KIP-974: Docker Image for GraalVM based Native Kafka Broker
> > > > > > > <
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-974%3A+Docker+Image+for+GraalVM+based+Native+Kafka+Broker
> > > > > > >
> > > > > > >
> > > > > > > Regards,
> > > > > > > Krishna
> > > > > >
> > > > >
> > > >
> > >
> >
>


[jira] [Created] (KAFKA-15755) LeaveGroupResponse v0-v2 should handle no members

2023-10-30 Thread Robert Wagner (Jira)
Robert Wagner created KAFKA-15755:
-

 Summary: LeaveGroupResponse v0-v2 should handle no members
 Key: KAFKA-15755
 URL: https://issues.apache.org/jira/browse/KAFKA-15755
 Project: Kafka
  Issue Type: Bug
Reporter: Robert Wagner


When Sarama and librdkafka consumer clients issue LeaveGroup requests, they use 
an older protocol version (< 3) which did not include a `members` field.

Since our upgrade to Kafka broker 3.4.1 we have started seeing these broker 
exceptions:

{code}
[2023-10-24 01:17:17,214] ERROR [KafkaApi-28598] Unexpected error handling 
request RequestHeader(apiKey=LEAVE_GROUP, apiVersion=1, clientId=REDACTED, 
correlationId=116775, headerVersion=1) -- 
LeaveGroupRequestData(groupId=REDACTED, 
memberId='REDACTED-73967453-93c4-4f3f-bcef-32c1f280350f', members=[]) with 
context RequestContext(header=RequestHeader(apiKey=LEAVE_GROUP, apiVersion=1, 
clientId=REDACTED, correlationId=116775, headerVersion=1), 
connectionId='REDACTED', clientAddress=/REDACTED, principal=REDACTED, 
listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
clientInformation=ClientInformation(softwareName=confluent-kafka-python, 
softwareVersion=1.7.0-rdkafka-1.7.0), fromPrivilegedListener=false, 
principalSerde=Optional[REDACTED]) (kafka.server.KafkaApis)
java.util.concurrent.CompletionException: 
org.apache.kafka.common.errors.UnsupportedVersionException: LeaveGroup response 
version 1 can only contain one member, got 0 members.
at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:315)
at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:320)
at 
java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:936)
at 
java.base/java.util.concurrent.CompletableFuture.uniHandleStage(CompletableFuture.java:950)
at 
java.base/java.util.concurrent.CompletableFuture.handle(CompletableFuture.java:2340)
at kafka.server.KafkaApis.handleLeaveGroupRequest(KafkaApis.scala:1796)
at kafka.server.KafkaApis.handle(KafkaApis.scala:196)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:75)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: org.apache.kafka.common.errors.UnsupportedVersionException: 
LeaveGroup response version 1 can only contain one member, got 0 members. {code}
 
KIP-848 introduced a check in LeaveGroupResponse that the members field must 
have exactly 1 element. In some error cases, it seems the members field has 0 
elements, which would still be a valid response for v0-v2 messages, yet this 
exception was being thrown.

Instead of throwing an exception in this case, the broker should continue with 
the LeaveGroupResponse, since the members field is not included in v0-v2 
responses anyway.
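A sketch of the suggested behavior (illustrative code only, not the actual broker patch): enforce the single-member constraint only where the v0-v2 wire format could actually carry more than one member, and let an empty list pass through.

```java
// Hypothetical sketch of version-aware validation for LeaveGroup responses:
// v0-v2 serialize at most a single top-level error, so more than one member
// cannot be represented, but zero members is fine since the field is omitted.
public class LeaveGroupVersionCheck {
    public static void validateMemberCount(short version, int memberCount) {
        if (version <= 2 && memberCount > 1) {
            throw new IllegalArgumentException(
                    "LeaveGroup response version " + version
                    + " can only contain one member, got " + memberCount + " members.");
        }
        // v3+ carries a member list, so any count is representable.
    }

    public static void main(String[] args) {
        validateMemberCount((short) 1, 0); // previously threw; accepted here
        validateMemberCount((short) 1, 1); // the normal single-member case
        try {
            validateMemberCount((short) 1, 2); // still unrepresentable in v1
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```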



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-974 Docker Image for GraalVM based Native Kafka Broker

2023-10-30 Thread Krishna Agarwal
Hi Federico,
Thanks for the feedback.

   1. Yes, we will add the building, testing and scanning automation for
   this Docker Image along with the flow mentioned in KIP-975. (Updated in the
   KIP)
   2. Added the other alternatives to the "rejected alternatives" section,
   instead of the main sections. (Updated in the KIP)
   3. Regarding the release process- In the KIP-975, it was concluded that
   there shouldn't be any docker specific release process. If there is a high
   severity CVE, we should release a new version of Kafka for the affected
   branch. It would include the latest Kafka code from the branch. In my
   opinion we should keep the same release process here for consistency.
   (Updated in the KIP)
   KIP-975 Release Process:
   
https://cwiki.apache.org/confluence/display/KAFKA/KIP-975%3A+Docker+Image+for+Apache+Kafka#KIP975:DockerImageforApacheKafka-ReleaseProcess
   Discussion thread for the same:
   https://lists.apache.org/thread/05t8ccvhp3fotfftgm7dzn8wobkl59l4


Regards,
Krishna

On Wed, Oct 25, 2023 at 9:50 PM Federico Valeri 
wrote:

> Hi Krishna, thanks for updating the KIP and all the work you are
> putting into that.
>
> The release process LGTM. In the other KIP I see that there will be
> some automation for building, testing and scanning for CVEs. Is this
> also true for native images?
>
> I see you are proposing to use Alpine as the base image. I would add
> Distroless to the rejected alternatives with the motivation. Maybe we
> can do the same for the GraalVM distribution of choice.
>
> On Fri, Oct 20, 2023 at 12:02 PM Manikumar 
> wrote:
> >
> > Hi,
> >
> > > For the native AK docker image, we are considering '*kafka-local*' as
> it
> > clearly signifies that this image is intended exclusively for local
> >
> > I am not sure, if there is any naming pattern for graalvm based images.
> Can
> > we include "graalvm" to the image name like "kafka-graalvm-native".
> > This will clearly indicate this is graalvm based image.
> >
> >
> > Thanks. Regards
> >
> >
> >
> >
> > On Wed, Oct 18, 2023 at 9:26 PM Krishna Agarwal <
> > krishna0608agar...@gmail.com> wrote:
> >
> > > Hi Federico,
> > > Thanks for the feedback and apologies for the delay.
> > >
> > > I've included a section in the KIP on the release process. I would
> greatly
> > > appreciate your insights after reviewing it.
> > >
> > > Regards,
> > > Krishna
> > >
> > > On Fri, Sep 8, 2023 at 3:08 PM Federico Valeri 
> > > wrote:
> > >
> > > > Hi Krishna, thanks for opening this discussion.
> > > >
> > > > I see you created two separate KIPs (974 and 975), but there are some
> > > > common points (build system and test plan).
> > > >
> > > > Currently, the Docker image used for system tests is only supported
> in
> > > > that limited scope, so the maintenance burden is minimal. Providing
> > > > official Kafka images would be much more complicated. Have you
> > > > considered how the image rebuild process would work in case a high
> > > > severity CVE comes out for a non Kafka image dependency? In that
> case,
> > > > there will be no Kafka release.
> > > >
> > > > Br
> > > > Fede
> > > >
> > > > On Fri, Sep 8, 2023 at 9:17 AM Krishna Agarwal
> > > >  wrote:
> > > > >
> > > > > Hi,
> > > > > I want to submit a KIP to deliver an experimental Apache Kafka
> docker
> > > > image.
> > > > > The proposed docker image can launch brokers with sub-second
> startup
> > > time
> > > > > and minimal memory footprint by leveraging a GraalVM based native
> Kafka
> > > > > binary.
> > > > >
> > > > > KIP-974: Docker Image for GraalVM based Native Kafka Broker
> > > > > <
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-974%3A+Docker+Image+for+GraalVM+based+Native+Kafka+Broker
> > > > >
> > > > >
> > > > > Regards,
> > > > > Krishna
> > > >
> > >
>


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2340

2023-10-30 Thread Apache Jenkins Server
See 




Re: Remaining tests that need to support KRaft

2023-10-30 Thread Sameer Tejani
Thanks, I forgot to put this in the bug description - can you add the label
kraft to your PR so that the larger team that reviews KRaft changes will
see it?  Thank you!

On Sun, Oct 29, 2023 at 7:21 PM ziming deng 
wrote:

> Hello Sameer, I have created a PR for some test in `kafka.api` and
> `kafka.network`, you can take a review when you are free, thanks.
>
>
>
> MINOR: Enable kraft test in kafka.api and kafka.network by dengziming ·
> Pull Request #14595 · apache/kafka
>
>
>
> On Oct 30, 2023, at 07:26, Sameer Tejani 
> wrote:
>
> Hi everyone,
>
> I worked with Colin who had taken an initial pass at tests that still need
> to be converted to support KRaft.  I created individual Jiras
> <
> https://issues.apache.org/jira/issues/?filter=-4&jql=labels%20in%20(kraft-test)
> >
> for them and have marked them with labels kraft-test.  Some of them should
> be simple enough to implement.  We will need to have them all converted
> before AK 4.0 is released.
>
> Thx
>
> --
> - Sameer
>
>
>

-- 
- Sameer


[jira] [Resolved] (KAFKA-15753) KRaft support in BrokerApiVersionsCommandTest

2023-10-30 Thread Sameer Tejani (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sameer Tejani resolved KAFKA-15753.
---
Resolution: Duplicate

> KRaft support in BrokerApiVersionsCommandTest
> -
>
> Key: KAFKA-15753
> URL: https://issues.apache.org/jira/browse/KAFKA-15753
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
>
> The following tests in BrokerApiVersionsCommandTest in 
> core/src/test/scala/integration/kafka/admin/BrokerApiVersionsCommandTest.scala
>  need to be updated to support KRaft
> 50 : def checkBrokerApiVersionCommandOutput(): Unit = {
> Scanned 80 lines. Found 0 KRaft tests out of 1 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15573) Implement auto-commit on partition assignment revocation

2023-10-30 Thread Lianet Magrans (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lianet Magrans resolved KAFKA-15573.

Resolution: Duplicate

> Implement auto-commit on partition assignment revocation
> 
>
> Key: KAFKA-15573
> URL: https://issues.apache.org/jira/browse/KAFKA-15573
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Kirk True
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
>
> When the group member's assignment changes and partitions are revoked and 
> auto-commit is enabled, we need to ensure that the commit request manager is 
> invoked to queue up the commits.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15539) Client should stop fetching while partitions being revoked

2023-10-30 Thread Lianet Magrans (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lianet Magrans resolved KAFKA-15539.

Resolution: Duplicate

> Client should stop fetching while partitions being revoked
> --
>
> Key: KAFKA-15539
> URL: https://issues.apache.org/jira/browse/KAFKA-15539
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Lianet Magrans
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-preview
>
> When partitions are being revoked (client received revocation on heartbeat 
> and is in the process of invoking the callback), we need to make sure we do 
> not fetch from those partitions anymore:
>  * no new fetches should be sent out for the partitions being revoked
>  * no fetch responses should be handled for those partitions (case where a 
> fetch was already in-flight when the partition revocation started.
> This does not seem to be handled in the current KafkaConsumer and the old 
> consumer protocol (only for the EAGER protocol). 
> Consider re-using the existing pendingRevocation logic that already exist in 
> the subscriptionState & used from the fetcher to determine if a partition is 
> fetchable. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-967: Support custom SSL configuration for Kafka Connect RestServer

2023-10-30 Thread Chris Egerton
Hi Taras,

Thanks for the KIP! I have some feedback but ultimately I like this
proposal:

1. The "ssl.engine.factory.class" property was originally added for Kafka
brokers in KIP-519 [1]. It'd be nice to link to that KIP (possibly in a
"Background" section?) so that reviewers who don't have that context can
find it quickly without having to dig through commit histories in the code
base.

2. Can we clarify that the new "listeners.https.ssl.engine.factory.class"
property (and the way that the engine factory is configured with all
properties prefixed with "listeners.https.") will also affect MirrorMaker 2
clusters with the internal REST server introduced by KIP-710 [2] enabled?

3. We don't need to specify in the KIP that the
org.apache.kafka.connect.runtime.rest.util.SSLUtils class will be removed,
since that class isn't part of public API (i.e., nobody should be using
that class directly in external projects). If you're ever in doubt about
which classes are part of the public API for the project, you can check the
Javadocs [3]; if it's part of our public API, it should be included in
them. The same applies for changes to the
org.apache.kafka.common.security.ssl.SslFactory class.

4. The test plan includes an integration test for "Default SSL behavior and
compatibility"--is this necessary? Doesn't the
existing org.apache.kafka.connect.integration.RestForwardingIntegrationTest
give us sufficient coverage already? Similarly, the test plan includes an
integration test for "RestClient creation" and calls out
the RestForwardingIntegrationTest--don't we already create RestClient
instances in that test (like here [4])? It seems like this part of the KIP
may implicitly include tests that are already covered by the existing code
base, but if that's the case, it'd be nice to see this clarified as the
assumption is usually that items in the test plan cover changes that will
have to be implemented for the KIP.

5. There are several methods in the SslEngineFactory interface that don't
seem applicable for Kafka Connect (or MM2): shouldBeRebuilt(Map<String, Object> nextConfigs), reconfigurableConfigs(), and possibly keystore() and
truststore(). Does it make sense to require users to implement these? It
seems like a new interface may make more sense here.
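
A purely hypothetical sketch of what such a narrowed interface could look like — only what an HTTPS listener needs, without broker-oriented methods such as shouldBeRebuilt() or reconfigurableConfigs(). The interface name, method names, and config key below are illustrative assumptions, not a concrete proposal:

```java
import java.util.Map;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

// Hypothetical narrowed SPI for Connect's REST server (illustrative only).
interface RestSslEngineFactory {
    void configure(Map<String, Object> configs);
    SSLEngine createServerSslEngine();
}

public class RestSslSketch {

    // A minimal implementation backed by the JVM's default TLS settings.
    static RestSslEngineFactory defaultFactory() {
        return new RestSslEngineFactory() {
            public void configure(Map<String, Object> configs) {
                // A real implementation would read keystore/truststore settings
                // from the "listeners.https."-prefixed configs here.
            }

            public SSLEngine createServerSslEngine() {
                try {
                    SSLContext ctx = SSLContext.getInstance("TLS");
                    ctx.init(null, null, null); // default key/trust managers
                    SSLEngine engine = ctx.createSSLEngine();
                    engine.setUseClientMode(false); // server-side engine
                    return engine;
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        };
    }

    public static void main(String[] args) {
        RestSslEngineFactory factory = defaultFactory();
        factory.configure(Map.of());
        System.out.println(factory.createServerSslEngine().getUseClientMode()); // false
    }
}
```

The point of the sketch is only the shape of the contract: configure once from listener-prefixed properties, then hand out server-mode SSLEngine instances.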

[1] -
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=128650952
[2] -
https://cwiki.apache.org/confluence/display/KAFKA/KIP-710%3A+Full+support+for+distributed+mode+in+dedicated+MirrorMaker+2.0+clusters
[3] - https://kafka.apache.org/36/javadoc/index.html?overview-summary.html
[4] -
https://github.com/apache/kafka/blob/9dbee599f13997effd8f7e278fd7256b850c8813/connect/runtime/src/test/java/org/apache/kafka/connect/integration/RestForwardingIntegrationTest.java#L161

Cheers,

Chris

On Thu, Oct 12, 2023 at 7:40 AM Taras Ledkov  wrote:

> Hi Ashwin,
>
> > I was referring to (and did not understand) the removal of L141 in
> clients/src/main/java/org/apache/kafka/common/security/ssl/SslFactory.java
> This line is moved to "new" private method `instantiateSslEngineFactory0
> `. Please take a look at the `SslFactory:L132` at the patch.
> Just dummy refactoring.
>
> > Yes, I think this class [SslEngineFactory] should be moved to something
> like `server-common` module - but would like any of the committers to
> comment on this.
> Sorry, I don't quite catch the idea.
> The SslEngineFactory public interface is placed in the 'clients' project. I
> don't know of a more common place.
>


Call for Presentations now open: Community over Code EU 2024

2023-10-30 Thread Ryan Skraba
(Note: You are receiving this because you are subscribed to the dev@
list for one or more projects of the Apache Software Foundation.)

It's back *and* it's new!

We're excited to announce that the first edition of Community over
Code Europe (formerly known as ApacheCon EU) which will be held at the
Radisson Blu Carlton Hotel in Bratislava, Slovakia from June 03-05,
2024! This eagerly anticipated event will be our first live EU
conference since 2019.

The Call for Presentations (CFP) for Community Over Code EU 2024 is
now open at https://eu.communityovercode.org/blog/cfp-open/,
and will close 2024/01/12 23:59:59 GMT.

We welcome submissions on any topic related to the Apache Software
Foundation, Apache projects, or the communities around those projects.
We are specifically looking for presentations in the following
categories:

* API & Microservices
* Big Data Compute
* Big Data Storage
* Cassandra
* CloudStack
* Community
* Data Engineering
* Fintech
* Groovy
* Incubator
* IoT
* Performance Engineering
* Search
* Tomcat, Httpd and other servers

Additionally, we are thrilled to introduce a new feature this year: a
poster session. This addition will provide an excellent platform for
showcasing high-level projects and incubator initiatives in a visually
engaging manner. We believe this will foster lively discussions and
facilitate networking opportunities among participants.

All my best, and thanks so much for your participation,

Ryan Skraba (on behalf of the program committee)

[Countdown]: https://www.timeanddate.com/countdown/to?iso=20240112T2359&p0=1440


[jira] [Resolved] (KAFKA-15631) Do not send new heartbeat request while another one in-flight

2023-10-30 Thread Philip Nee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Nee resolved KAFKA-15631.

Resolution: Not A Problem

> Do not send new heartbeat request while another one in-flight
> -
>
> Key: KAFKA-15631
> URL: https://issues.apache.org/jira/browse/KAFKA-15631
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Philip Nee
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
>
> Client consumer should not send a new heartbeat request while there is a 
> previous in-flight. If a HB is in-flight, we should wait for a response or 
> timeout before sending a next one.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15628) Refactor ConsumerRebalanceListener invocation for reuse

2023-10-30 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15628.
-
Fix Version/s: 3.7.0
   Resolution: Fixed

> Refactor ConsumerRebalanceListener invocation for reuse
> ---
>
> Key: KAFKA-15628
> URL: https://issues.apache.org/jira/browse/KAFKA-15628
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Affects Versions: 3.7.0
>Reporter: Kirk True
>Assignee: Kirk True
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
> Fix For: 3.7.0
>
>
> Pull out the code related to invoking {{ConsumerRebalanceListener}} methods 
> into its own class so that it can be reused by the KIP-848 implementation



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINOR: Add note about KAFKA-15653 [kafka-site]

2023-10-30 Thread via GitHub


jolshan merged PR #564:
URL: https://github.com/apache/kafka-site/pull/564


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (KAFKA-15643) Improve unloading logging

2023-10-30 Thread Ritika Muduganti (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ritika Muduganti resolved KAFKA-15643.
--
  Reviewer: David Jacot
Resolution: Fixed

> Improve unloading logging
> -
>
> Key: KAFKA-15643
> URL: https://issues.apache.org/jira/browse/KAFKA-15643
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: Ritika Muduganti
>Priority: Major
>
> When a new leader is elected for a __consumer_offset partition, the followers 
> are notified to unload the state. However, only the former leader is aware of 
> it. The remaining follower prints out the following error:
> ERROR [GroupCoordinator id=1] Execution of 
> UnloadCoordinator(tp=__consumer_offsets-1, epoch=0) failed due to This is not 
> the correct coordinator.. 
> (org.apache.kafka.coordinator.group.runtime.CoordinatorRuntime)
> The error is actually correct but we should improve the logging to not print 
> anything in the remaining-follower case.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (KAFKA-4852) ByteBufferSerializer not compatible with offsets

2023-10-30 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reopened KAFKA-4852:

  Assignee: (was: LinShunkang)

> ByteBufferSerializer not compatible with offsets
> 
>
> Key: KAFKA-4852
> URL: https://issues.apache.org/jira/browse/KAFKA-4852
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.1
> Environment: all
>Reporter: Werner Daehn
>Priority: Minor
> Fix For: 3.4.0
>
>
> Quick intro: A ByteBuffer.rewind() resets the position to zero. What if the 
> ByteBuffer was created with an offset, e.g. ByteBuffer.wrap(data, 3, 10)? The 
> ByteBufferSerializer will send from pos=0 and not from pos=3 onwards.
> Solution: No rewind() but flip() for reading a ByteBuffer. That's what the 
> flip is meant for.
> Story:
> Imagine the incoming data comes from a byte[], e.g. a network stream 
> containing topicname, partition, key, value, ... and you want to create a new 
> ProducerRecord for that. As the constructor of ProducerRecord requires 
> (topic, partition, key, value) you have to copy from above byte[] the key and 
> value. That means there is a memcopy taking place. Since the payload can be 
> potentially large, that introduces a lot of overhead. Twice the memory.
> A nice solution to this problem is to simply wrap the network byte[] into new 
> ByteBuffers:
> ByteBuffer key = ByteBuffer.wrap(data, keystart, keylength);
> ByteBuffer value = ByteBuffer.wrap(data, valuestart, valuelength);
> and then use the ByteBufferSerializer instead of the ByteArraySerializer.
> But that does not work as the ByteBufferSerializer does a rewind(), hence 
> both, key and value, will start at position=0 of the data[].
> public class ByteBufferSerializer implements Serializer<ByteBuffer> {
> public byte[] serialize(String topic, ByteBuffer data) {
>  data.rewind();



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
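
The rewind() pitfall described in KAFKA-4852 above is easy to see with a small standalone sketch. This is plain java.nio, not the Kafka serializer itself; the payload bytes are made up for illustration:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class RewindVsPosition {

    // Read all bytes between the buffer's current position and its limit.
    static String read(ByteBuffer bb) {
        byte[] out = new byte[bb.remaining()];
        bb.get(out);
        return new String(out, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] network = "xxkeyzz".getBytes(StandardCharsets.UTF_8);

        // Wrap a sub-range of the array: position=2, limit=5, payload "key".
        ByteBuffer wrapped = ByteBuffer.wrap(network, 2, 3);

        // Honoring the position yields the intended payload.
        System.out.println(read(wrapped.duplicate())); // key

        // rewind() resets the position to 0 and discards the offset, which is
        // what the serializer's rewind() call does to a wrapped sub-range.
        ByteBuffer rewound = wrapped.duplicate();
        rewound.rewind();
        System.out.println(read(rewound)); // xxkey
    }
}
```

duplicate() is used so each read works on a fresh view of the same backing array without mutating the original buffer's position.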


[jira] [Resolved] (KAFKA-15602) Breaking change in 3.4.0 ByteBufferSerializer

2023-10-30 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-15602.
-
Fix Version/s: 3.4.2
   3.5.2
   3.7.0
   3.6.1
 Assignee: Matthias J. Sax
   Resolution: Fixed

As discussed, reverted this in all applicable branches.

> Breaking change in 3.4.0 ByteBufferSerializer
> -
>
> Key: KAFKA-15602
> URL: https://issues.apache.org/jira/browse/KAFKA-15602
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 3.4.0, 3.5.0, 3.4.1, 3.6.0, 3.5.1
>Reporter: Luke Kirby
>Assignee: Matthias J. Sax
>Priority: Critical
> Fix For: 3.4.2, 3.5.2, 3.7.0, 3.6.1
>
>
> [This PR|https://github.com/apache/kafka/pull/12683/files] claims to have 
> solved the situation described by KAFKA-4852, namely, to have 
> ByteBufferSerializer respect ByteBuffers wrapping byte arrays with non-0 
> offsets (or, put another way, to honor the buffer's position() as the start 
> point to consume bytes from). Unfortunately, it failed to actually do this, 
> and instead changed the expectations for how an input ByteBuffer's limit and 
> position should be set before being provided to send() on a producer 
> configured with ByteBufferSerializer. Code that worked with pre-3.4.0 
> releases now produces 0-length messages instead of the intended messages, 
> effectively introducing a breaking change for existing users of the 
> serializer in the wild.
> Here are a few different inputs and serialized outputs under pre-3.4.0 and 
> 3.4.0+ to summarize the breaking change:
> ||buffer argument||3.3.2 serialized output||3.4.0+ serialized output||
> |ByteBuffer.wrap("test".getBytes(UTF_8))|len=4 val=test|len=4 val=test|
> |ByteBuffer.allocate(8).put("test".getBytes(UTF_8)).flip()|len=4 val=test|len=0 val=|
> |ByteBuffer.allocate(8).put("test".getBytes(UTF_8))|len=8 val=test<0><0><0><0>|len=4 val=test|
> |ByteBuffer buff = ByteBuffer.allocate(8).put("test".getBytes(UTF_8)); buff.limit(buff.position());|len=4 val=test|len=4 val=test|
> |ByteBuffer.wrap("test".getBytes(UTF_8), 1, 3)|len=4 val=test|len=1 val=t|
> Notably, plain-wrappers of byte arrays continue to work under both versions 
> due to the special case in the serializer for them. I suspect that this is 
> the dominant use-case, which is why this has apparently gone un-reported to 
> this point. The wrapped-with-offset case fails under both versions for 
> different reasons (the expected value would be "est"). As demonstrated here, 
> you can ensure that a manually assembled ByteBuffer will work under both 
> versions by ensuring that your buffers have position == limit == message-length 
> (and an actual desired start position of 0). Clearly, though, behavior has 
> changed dramatically for the second and third case there, with the 3.3.2 
> behavior, in my experience, aligning better with naive expectations.
> [Previously|https://github.com/apache/kafka/blob/35a0de32ee3823dfb548a1cd5d5faf4f7c99e4e0/clients/src/main/java/org/apache/kafka/common/serialization/ByteBufferSerializer.java],
>  the serializer would just rewind() the buffer and respect the limit as the 
> indicator as to how much data was in the buffer. So, essentially, the 
> prevailing contract was that the data from position 0 (always!) up to the 
> limit on the buffer would be serialized; so it was really just the limit that 
> was honored. So if, per the original issue, you have a byte[] array wrapped 
> with, say, ByteBuffer.wrap(bytes, 3, 5) then that will yield a ByteBuffer() 
> with position = 3 indicating the desired start point to read from, but 
> effectively ignored by the serializer due to the rewind().
> So while the serializer didn't work when presenting a ByteBuffer view onto a 
> sub-view of a backing array, it did however follow expected behavior when 
> employing standard patterns to populate ByteBuffers backed by 
> larger-than-necessary arrays and using limit() to identify the end of actual 
> data, consistent with conventional usage of flip() to switch from writing to 
> a buffer to setting it up to be read from (e.g., to be passed into a 
> producer.send() call). E.g.,
> {code:java}
> ByteBuffer bb = ByteBuffer.allocate(TOO_MUCH);
> ... // some sequence of 
> bb.put(...); // populate buffer with some number of bytes less than TOO_MUCH 
> ... 
> bb.flip(); /* logically, this says "I am done writing, let's set this up for 
> reading"; pragmatically, it sets the limit to the current position so that 
> whoever reads the buffer knows when to stop reading, and sets the position to 
> zero so it knows where to start reading from */ 
> producer.send(bb); {code}
> Technically, you wouldn't even need to use flip() there, since position i
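
For reference, the two contracts summarized in the table above can be reproduced outside Kafka. The serialize332/serialize340 methods below are simplified models of the pre-3.4.0 and 3.4.0+ behaviors as described in this ticket, not the actual Kafka classes:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ByteBufferSerializerSketch {

    // Pre-3.4.0 model: ignore position, serialize bytes [0, limit).
    static byte[] serialize332(ByteBuffer data) {
        if (data == null) return null;
        // Special case: a plain full-array wrapper is returned as-is.
        if (data.hasArray() && data.arrayOffset() == 0 && data.array().length == data.limit()) {
            return data.array();
        }
        byte[] ret = new byte[data.limit()];
        ByteBuffer copy = data.duplicate(); // avoid mutating the caller's buffer
        copy.rewind();
        copy.get(ret);
        return ret;
    }

    // 3.4.0+ model: treat position as the end of written data, serialize [0, position).
    static byte[] serialize340(ByteBuffer data) {
        if (data == null) return null;
        // Same full-array special case, keyed on remaining() instead of limit().
        if (data.hasArray() && data.arrayOffset() == 0 && data.array().length == data.remaining()) {
            return data.array();
        }
        byte[] ret = new byte[data.position()];
        ByteBuffer copy = data.duplicate();
        copy.rewind();
        copy.get(ret);
        return ret;
    }

    static String s(byte[] b) { return new String(b, StandardCharsets.UTF_8); }

    public static void main(String[] args) {
        byte[] test = "test".getBytes(StandardCharsets.UTF_8);

        // Flipped buffer: position=0, limit=4.
        ByteBuffer flipped = (ByteBuffer) ByteBuffer.allocate(8).put(test).flip();
        System.out.println(s(serialize332(flipped.duplicate())));      // test
        System.out.println(serialize340(flipped.duplicate()).length);  // 0

        // Unflipped buffer: position=4, limit=8.
        ByteBuffer unflipped = ByteBuffer.allocate(8).put(test);
        System.out.println(serialize332(unflipped.duplicate()).length); // 8
        System.out.println(s(serialize340(unflipped.duplicate())));     // test
    }
}
```

Running the sketch against the table's inputs reproduces every row, including the wrap-with-offset case ("test" under the 3.3.2 model, "t" under the 3.4.0 model).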

Re: [DISCUSS] KIP-992 Proposal to introduce IQv2 Query Types: TimestampedKeyQuery and TimestampedRangeQuery

2023-10-30 Thread Hanyu (Peter) Zheng
Hi, Matthias,
Now, if we use TimestampedKeyQuery to query kv-store, it will throw a
exception,
the exception like this:
java.lang.IllegalArgumentException: Cannot get result for failed query.

Sincerely,
Hanyu
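
The failure mode above follows the usual IQv2 pattern: the store reports the unsupported query as a failed QueryResult, and getResult() throws only when called on that failed result. A minimal stand-in (names mirror the real API, but this is an illustrative sketch, not the Kafka Streams classes) shows the guard:

```java
// Simplified stand-in for Kafka Streams' IQv2 QueryResult (illustrative only).
public class QueryResultSketch<R> {
    private final R result;
    private final String failureMessage;

    private QueryResultSketch(R result, String failureMessage) {
        this.result = result;
        this.failureMessage = failureMessage;
    }

    static <R> QueryResultSketch<R> forResult(R result) {
        return new QueryResultSketch<>(result, null);
    }

    static <R> QueryResultSketch<R> forUnknownQueryType(String message) {
        return new QueryResultSketch<>(null, message);
    }

    boolean isSuccess() { return failureMessage == null; }

    R getResult() {
        if (!isSuccess()) {
            // Mirrors the exception seen above when a query is rejected.
            throw new IllegalArgumentException("Cannot get result for failed query.");
        }
        return result;
    }

    public static void main(String[] args) {
        QueryResultSketch<String> failed =
            QueryResultSketch.forUnknownQueryType("TimestampedKeyQuery on a plain kv-store");
        // Check isSuccess() before getResult() to avoid the exception.
        if (failed.isSuccess()) {
            System.out.println(failed.getResult());
        } else {
            System.out.println("query rejected: unsupported query type");
        }
    }
}
```

Callers that skip the isSuccess() check and call getResult() directly see the IllegalArgumentException quoted above.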

On Thu, Oct 26, 2023 at 3:45 PM Hao Li  wrote:

> Thanks for the KIP Hanyu! One question: why not return an iterator of
> `ValueAndTimestamp` for `TimestampedKeyQuery`? I suppose for a
> ts-kv-store, there could be multiple timestamps associated with the same
> key?
>
> Hao
>
> On Thu, Oct 26, 2023 at 10:23 AM Matthias J. Sax  wrote:
>
> > Would we really get a ClassCastException?
> >
> >  From my understanding, the store would reject the query as unsupported
> > and thus the returned `QueryResult` object would have it's internal flag
> > set to indicate the failure, but no exception would be thrown directly?
> >
> > (Of course, there might be an exception thrown to the user if they don't
> > check `isSuccess()` flag but call `getResult()` directly.)
> >
> >
> > -Matthias
> >
> > On 10/25/23 8:55 AM, Hanyu (Peter) Zheng wrote:
> > > Hi, Bill,
> > > Thank you for your reply. Yes, now, if a user executes a timestamped
> > query
> > > against a non-timestamped store, It will throw ClassCastException.
> > > If a user uses KeyQuery to query kv-store or ts-kv-store, it always
> > return
> > > V.  If a user uses TimestampedKeyQuery to query kv-store, it will
> throw a
> > > exception, so TimestampedKeyQuery query can only query ts-kv-store and
> > > return ValueAndTimestamp object in the end.
> > >
> > > Sincerely,
> > > Hanyu
> > >
> > > On Wed, Oct 25, 2023 at 8:51 AM Hanyu (Peter) Zheng <
> pzh...@confluent.io
> > >
> > > wrote:
> > >
> > >> Thank you Lucas,
> > >>
> > >> I will fix the capitalization.
> > >> When a user executes a timestamped query against a non-timestamped
> > store,
> > >> It will throw ClassCastException.
> > >>
> > >> Sincerely,
> > >> Hanyu
> > >>
> > >> On Tue, Oct 24, 2023 at 1:36 AM Lucas Brutschy
> > >>  wrote:
> > >>
> > >>> Hi Hanyu,
> > >>>
> > >>> reading the KIP, I was wondering the same thing as Bill.
> > >>>
> > >>> Other than that, this looks good to me. Thanks for KIP.
> > >>>
> > >>> nit: you have method names `LowerBound` and `UpperBound`, where you
> > >>> probably want to fix the capitalization.
> > >>>
> > >>> Cheers,
> > >>> Lucas
> > >>>
> > >>> On Mon, Oct 23, 2023 at 5:46 PM Bill Bejeck 
> wrote:
> > 
> >  Hey Hanyu,
> > 
> >  Thanks for the KIP, it's a welcomed addition.
> >  Overall, the KIP looks good to me, I just have one comment.
> > 
> >  Can you discuss the expected behavior when a user executes a
> > timestamped
> >  query against a non-timestamped store?  I think it should throw an
> >  exception vs. using some default value.
> >  If it's the case that Kafka Stream wraps all stores in a
> >  `TimestampAndValue` store and returning a plain `V` or a
> >  `TimestampAndValue` object depends on the query type, then it
> would
> > >>> be
> >  good to add those details to the KIP.
> > 
> >  Thanks,
> >  Bill
> > 
> > 
> > 
> >  On Fri, Oct 20, 2023 at 5:07 PM Hanyu (Peter) Zheng
> >   wrote:
> > 
> > > Thank you Matthias,
> > >
> > > I will modify the KIP to eliminate this restriction.
> > >
> > > Sincerely,
> > > Hanyu
> > >
> > > On Fri, Oct 20, 2023 at 2:04 PM Hanyu (Peter) Zheng <
> > >>> pzh...@confluent.io>
> > > wrote:
> > >
> > >> Thank you Alieh,
> > >>
> > >> In these two new query types, I will remove 'get' from all getter
> > >>> method
> > >> names.
> > >>
> > >> Sincerely,
> > >> Hanyu
> > >>
> > >> On Fri, Oct 20, 2023 at 10:40 AM Matthias J. Sax <
> mj...@apache.org>
> > > wrote:
> > >>
> > >>> Thanks for the KIP Hanyu,
> > >>>
> > >>> One questions:
> > >>>
> >  To address this inconsistency, we propose that KeyQuery  should
> > >>> be
> > >>> restricted to querying kv-stores  only, ensuring that it always
> > >>> returns
> > > a
> > >>> plain V  type, making the behavior of the aforementioned code
> more
> > >>> predictable. Similarly, RangeQuery  should be dedicated to
> querying
> > >>> kv-stores , consistently returning only the plain V .
> > >>>
> > >>> Why do you want to restrict `KeyQuery` and `RangeQuery` to
> > >>> kv-stores? I
> > >>> think it would be possible to still allow both queries for
> > >>> ts-kv-stores,
> > >>> but change the implementation to return "plain V" instead of
> > >>> `ValueAndTimestamp`, ie, the implementation would
> automatically
> > >>> unwrap the value.
> > >>>
> > >>>
> > >>>
> > >>> -Matthias
> > >>>
> > >>> On 10/20/23 2:32 AM, Alieh Saeedi wrote:
> >  Hey Hanyu,
> > 
> >  Thanks for the KIP. It seems good to me.
> >  Just one point: AFAIK, we are going to remove "get" from the
> > >>> name of
> > > all
> 

Re: [DISCUSS] KIP-992 Proposal to introduce IQv2 Query Types: TimestampedKeyQuery and TimestampedRangeQuery

2023-10-30 Thread Hanyu (Peter) Zheng
Hi, Hao,

For TimestampedKeyQuery, it only returns the value of the key, and the
value should be ValueAndTimestamp.
If you want to get an  iterator of `ValueAndTimestamp`, you can use
TimestampedRangeQuery.

Sincerely,
Hanyu

On Mon, Oct 30, 2023 at 1:42 PM Hanyu (Peter) Zheng 
wrote:

> Hi, Matthias,
> Now, if we use TimestampedKeyQuery to query a kv-store, it will throw an
> exception like this:
> java.lang.IllegalArgumentException: Cannot get result for failed query.
>
> Sincerely,
> Hanyu
>
> On Thu, Oct 26, 2023 at 3:45 PM Hao Li  wrote:
>
>> Thanks for the KIP Hanyu! One question: why not return an iterator of
>> `ValueAndTimestamp` for `TimestampedKeyQuery`? I suppose for a
>> ts-kv-store, there could be multiple timestamps associated with the same
>> key?
>>
>> Hao
>>

[jira] [Created] (KAFKA-15756) Migrate existing integration tests to run old protocol in new coordinator

2023-10-30 Thread Dongnuo Lyu (Jira)
Dongnuo Lyu created KAFKA-15756:
---

 Summary: Migrate existing integration tests to run old protocol in 
new coordinator
 Key: KAFKA-15756
 URL: https://issues.apache.org/jira/browse/KAFKA-15756
 Project: Kafka
  Issue Type: Sub-task
Reporter: Dongnuo Lyu






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15757) Do not advertise v4 AddPartitionsToTxn to clients

2023-10-30 Thread Justine Olshan (Jira)
Justine Olshan created KAFKA-15757:
--

 Summary: Do not advertise v4 AddPartitionsToTxn to clients
 Key: KAFKA-15757
 URL: https://issues.apache.org/jira/browse/KAFKA-15757
 Project: Kafka
  Issue Type: Sub-task
Reporter: Justine Olshan


v4+ is intended to be a broker-side API. Thus, we should not return it as a 
valid version to clients.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2341

2023-10-30 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-15758) Always schedule wrapped callbacks

2023-10-30 Thread Justine Olshan (Jira)
Justine Olshan created KAFKA-15758:
--

 Summary: Always schedule wrapped callbacks
 Key: KAFKA-15758
 URL: https://issues.apache.org/jira/browse/KAFKA-15758
 Project: Kafka
  Issue Type: Sub-task
Reporter: Justine Olshan


As part of 
[https://github.com/apache/kafka/commit/08aa33127a4254497456aa7a0c1646c7c38adf81]
 the finding of the coordinator was moved to the AddPartitionsToTxnManager. In 
the case of an error, we return the error on the wrapped callback. 

This seemed to cause issues in the tests; executing the callback directly,
rather than rescheduling it on the request channel, appeared to resolve them.

One theory was that scheduling the callback before the request returned caused 
issues.

Ideally we wouldn't have this special handling. This ticket is to remove it.
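The ordering difference at the heart of this ticket can be illustrated with a toy sketch. This is not Kafka's actual request-channel API; a single-threaded ExecutorService stands in for the channel as an assumption:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CallbackOrderingDemo {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for the request channel: a queue that runs work after the
        // current request handling has finished.
        ExecutorService requestChannel = Executors.newSingleThreadExecutor();
        StringBuilder order = new StringBuilder();

        // Direct execution: the callback fires before the request returns.
        Runnable callback = () -> order.append("callback;");
        callback.run();
        order.append("request-returned;");

        // Scheduled execution: the callback is queued and fires afterwards.
        requestChannel.submit(() -> order.append("scheduled-callback;"));
        requestChannel.shutdown();
        requestChannel.awaitTermination(1, TimeUnit.SECONDS);

        // Order of events: callback;request-returned;scheduled-callback;
        System.out.println(order);
    }
}
```

Under this reading, "always schedule" means every callback takes the second path, so none can run before the request that registered it has returned.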





Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.6 #103

2023-10-30 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 288482 lines...]

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldUnassignTaskWhenRequired() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldUnassignTaskWhenRequired() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldProcessTasks() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldProcessTasks() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldPunctuateStreamTime() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldPunctuateStreamTime() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldShutdownTaskExecutor() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldShutdownTaskExecutor() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > 
shouldRespectPunctuationDisabledByTaskExecutionMetadata() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > 
shouldRespectPunctuationDisabledByTaskExecutionMetadata() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldPunctuateSystemTime() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldPunctuateSystemTime() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldUnassignTaskWhenNotProgressing() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldUnassignTaskWhenNotProgressing() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > 
shouldRespectProcessingDisabledByTaskExecutionMetadata() STARTED

Exception: java.lang.AssertionError thrown from the UncaughtExceptionHandler in 
thread "TaskExecutor"

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > 
shouldRespectProcessingDisabledByTaskExecutionMetadata() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > StateQueryResultTest 
> More than one query result throws IllegalArgumentException STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > StateQueryResultTest 
> More than one query result throws IllegalArgumentException PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > StateQueryResultTest 
> Zero query results shouldn't error STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > StateQueryResultTest 
> Zero query results shouldn't error PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > StateQueryResultTest 
> Valid query results still works STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > StateQueryResultTest 
> Valid query results still works PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(RocksDBStore, 
StateStoreContext) > [1] 
org.apache.kafka.streams.state.internals.RocksDBStore@1d283046, 
org.apache.kafka.test.MockInternalProcessorContext@5d43178b STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(RocksDBStore, 
StateStoreContext) > [1] 
org.apache.kafka.streams.state.internals.RocksDBStore@1d283046, 
org.apache.kafka.test.MockInternalProcessorContext@5d43178b PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(RocksDBStore, 
StateStoreContext) > [2] 
org.apache.kafka.streams.state.internals.RocksDBTimestampedStore@580b36df, 
org.apache.kafka.test.MockInternalProcessorContext@17097f50 STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(RocksDBStore, 
StateStoreContext) > [2] 
org.apache.kafka.streams.state.internals.RocksDBTimestampedStore@580b36df, 
org.apache.kafka.test.MockInternalProcessorContext@17097f50 PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
RocksDBBlockCacheMetricsTest > 
shouldRecordCorrectBlockCachePinnedUsage(RocksDBStore, StateStoreContext) > [1] 
org.apache.kafka.streams.state.internals.RocksDBStore@71bde139, 
org.apache.kafka.test.MockInternalProcessorContext@29c85409 STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
RocksDBBlockCacheMetricsTest > 
shouldRecordCorrectBlockCachePinnedUsage(RocksDBStore, StateStoreContext) > [1] 
org.apache.kafka.streams.state.internals.RocksDBStore@71bde139, 
org.apache.kafka.test.MockInternalProcessorContext@29c85409 PASSED

Gradle Test

[jira] [Created] (KAFKA-15759) DescribeClusterRequestTest is flaky

2023-10-30 Thread Calvin Liu (Jira)
Calvin Liu created KAFKA-15759:
--

 Summary: DescribeClusterRequestTest is flaky
 Key: KAFKA-15759
 URL: https://issues.apache.org/jira/browse/KAFKA-15759
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Reporter: Calvin Liu


testDescribeClusterRequestIncludingClusterAuthorizedOperations(String).quorum=kraft
 – kafka.server.DescribeClusterRequestTest
{code:java}
org.opentest4j.AssertionFailedError: expected: 
 but was:  at 
org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
at 
org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
at 
org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)   at 
org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:182)   at 
org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:177)   at 
org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:1141)  at 
kafka.server.DescribeClusterRequestTest.$anonfun$testDescribeClusterRequest$4(DescribeClusterRequestTest.scala:99)
   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
kafka.server.DescribeClusterRequestTest.testDescribeClusterRequest(DescribeClusterRequestTest.scala:86)
  at 
kafka.server.DescribeClusterRequestTest.testDescribeClusterRequestIncludingClusterAuthorizedOperations(DescribeClusterRequestTest.scala:53)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)   
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 {code}





[jira] [Created] (KAFKA-15760) org.apache.kafka.trogdor.coordinator.CoordinatorTest.testTaskRequestWithOldStartMsGetsUpdated is flaky

2023-10-30 Thread Calvin Liu (Jira)
Calvin Liu created KAFKA-15760:
--

 Summary: 
org.apache.kafka.trogdor.coordinator.CoordinatorTest.testTaskRequestWithOldStartMsGetsUpdated
 is flaky
 Key: KAFKA-15760
 URL: https://issues.apache.org/jira/browse/KAFKA-15760
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Reporter: Calvin Liu


Build / JDK 17 and Scala 2.13 / testTaskRequestWithOldStartMsGetsUpdated() – 
org.apache.kafka.trogdor.coordinator.CoordinatorTest
{code:java}
java.util.concurrent.TimeoutException: 
testTaskRequestWithOldStartMsGetsUpdated() timed out after 12 milliseconds  
 at 
org.junit.jupiter.engine.extension.TimeoutExceptionFactory.create(TimeoutExceptionFactory.java:29)
   at 
org.junit.jupiter.engine.extension.SameThreadTimeoutInvocation.proceed(SameThreadTimeoutInvocation.java:58)
  at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
 at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
   at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
   at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
   at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:218)
at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
 {code}





[jira] [Created] (KAFKA-15761) ConnectorRestartApiIntegrationTest.testMultiWorkerRestartOnlyConnector is flaky

2023-10-30 Thread Calvin Liu (Jira)
Calvin Liu created KAFKA-15761:
--

 Summary: 
ConnectorRestartApiIntegrationTest.testMultiWorkerRestartOnlyConnector is flaky
 Key: KAFKA-15761
 URL: https://issues.apache.org/jira/browse/KAFKA-15761
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Reporter: Calvin Liu


Build / JDK 21 and Scala 2.13 / testMultiWorkerRestartOnlyConnector – 
org.apache.kafka.connect.integration.ConnectorRestartApiIntegrationTest
{code:java}
java.lang.AssertionError: Failed to stop connector and tasks within 12ms
at org.junit.Assert.fail(Assert.java:89)at 
org.junit.Assert.assertTrue(Assert.java:42)  at 
org.apache.kafka.connect.integration.ConnectorRestartApiIntegrationTest.runningConnectorAndTasksRestart(ConnectorRestartApiIntegrationTest.java:273)
 at 
org.apache.kafka.connect.integration.ConnectorRestartApiIntegrationTest.testMultiWorkerRestartOnlyConnector(ConnectorRestartApiIntegrationTest.java:231)
 at 
java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)   at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
   at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)   
 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)  at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)  at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)  at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
   at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
 {code}





[jira] [Created] (KAFKA-15762) ClusterConnectionStatesTest.testSingleIP is flaky

2023-10-30 Thread Calvin Liu (Jira)
Calvin Liu created KAFKA-15762:
--

 Summary: ClusterConnectionStatesTest.testSingleIP is flaky
 Key: KAFKA-15762
 URL: https://issues.apache.org/jira/browse/KAFKA-15762
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Reporter: Calvin Liu


Build / JDK 11 and Scala 2.13 / testSingleIP() – 
org.apache.kafka.clients.ClusterConnectionStatesTest
{code:java}
org.opentest4j.AssertionFailedError: expected: <1> but was: <2> at 
app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
   at 
app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
   at 
app//org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)  at 
app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:150)  at 
app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:145)  at 
app//org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:527)  at 
app//org.apache.kafka.clients.ClusterConnectionStatesTest.testSingleIP(ClusterConnectionStatesTest.java:267)
 at 
java.base@11.0.16.1/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
 Method) at 
java.base@11.0.16.1/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at 
java.base@11.0.16.1/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.base@11.0.16.1/java.lang.reflect.Method.invoke(Method.java:566) 
{code}





[jira] [Resolved] (KAFKA-15754) The kafka-storage tool can generate UUID starting with "-"

2023-10-30 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe resolved KAFKA-15754.
--
Resolution: Invalid

The kafka-storage tool cannot, in fact, generate UUIDs starting with '-'.

> The kafka-storage tool can generate UUID starting with "-"
> --
>
> Key: KAFKA-15754
> URL: https://issues.apache.org/jira/browse/KAFKA-15754
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.6.0
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
>Priority: Major
>
> Using the kafka-storage.sh tool, it seems that it can still generate a UUID 
> starting with a dash "-", which then breaks how the argparse4j library works. 
> With such a UUID (e.g. -rmdB0m4T4--Y4thlNXk4Q in my case) the tool exits with 
> the following error:
> kafka-storage: error: argument --cluster-id/-t: expected one argument
> That said, this problem was already addressed in the Uuid.randomUuid 
> method, which keeps generating a new UUID until it doesn't start with "-". 
> This is the commit addressing it: 
> [https://github.com/apache/kafka/commit/5c1dd493d6f608b566fdad5ab3a896cb13622bce]
> The problem is that when toString is called on the Uuid instance, it 
> Base64-encodes the generated UUID this way:
> {code:java}
> Base64.getUrlEncoder().withoutPadding().encodeToString(getBytesFromUuid()); 
> {code}
> Not sure why, but the code is using a URL-safe encoder which, looking at 
> the Base64 class in Java, is an RFC4648_URLSAFE encoder with the following 
> alphabet:
>  
> {code:java}
> private static final char[] toBase64URL = new char[]{'A', 'B', 'C', 'D', 'E', 
> 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 
> 'U', 'V', 'W', 'X', 'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 
> 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 
> 'y', 'z', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '-', '_'}; {code}
> which as you can see includes the "-" character.
> So despite the current Uuid.randomUuid is avoiding the generation of a UUID 
> containing a dash, the Base64 encoding operation can return a final UUID 
> starting with the dash instead.
>  
> I was wondering if there is any good reason for using a Base64 URL encoder 
> rather than the plain RFC 4648 encoder, whose common Base64 alphabet does 
> not contain "-".
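The behavior described above is easy to reproduce with plain java.util.Base64 (a standalone illustration, not Kafka's Uuid code): any 16-byte value whose first six bits equal 62 encodes to a string starting with "-" under the URL-safe alphabet, while the standard RFC 4648 alphabet maps the same index to "+".

```java
import java.util.Base64;

public class UrlSafeDashDemo {
    public static void main(String[] args) {
        byte[] raw = new byte[16];          // stand-in for the 128-bit UUID
        raw[0] = (byte) 0xF8;               // 11111000: first 6 bits = 62

        // URL-safe alphabet: index 62 is '-', so the encoded id starts
        // with a dash
        String urlSafe = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        // Standard alphabet: index 62 is '+', which does not look like an
        // option prefix to an argument parser
        String standard = Base64.getEncoder().withoutPadding().encodeToString(raw);

        System.out.println(urlSafe);        // starts with '-'
        System.out.println(standard);       // starts with '+'
    }
}
```

Since '+' and '/' are not option prefixes, a non-URL-safe encoding would avoid IDs that look like command-line flags.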





Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.5 #91

2023-10-30 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 564941 lines...]
Running in 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.5/streams/quickstart/test-streams-archetype/streams.examples
[Pipeline] {
[Pipeline] sh
+ mvn compile
[INFO] Scanning for projects...
[INFO] 
[INFO] -< streams.examples:streams.examples >--
[INFO] Building Kafka Streams Quickstart :: Java 0.1
[INFO]   from pom.xml
[INFO] [ jar ]-
[INFO] 
[INFO] --- resources:3.3.1:resources (default-resources) @ streams.examples ---
[INFO] Copying 1 resource from src/main/resources to target/classes
[INFO] 
[INFO] --- compiler:3.1:compile (default-compile) @ streams.examples ---

Gradle Test Run :streams:integrationTest > Gradle Test Executor 185 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerLeft[caching
 enabled = true] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 185 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testOuterInner[caching
 enabled = true] STARTED
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 3 source files to 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.5/streams/quickstart/test-streams-archetype/streams.examples/target/classes
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time:  7.925 s
[INFO] Finished at: 2023-10-31T00:41:21Z
[INFO] 
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }

Gradle Test Run :streams:integrationTest > Gradle Test Executor 185 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testOuterInner[caching
 enabled = true] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 185 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testOuterOuter[caching
 enabled = true] STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 185 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testOuterOuter[caching
 enabled = true] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 185 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerWithRightVersionedOnly[caching
 enabled = true] STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 185 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerWithRightVersionedOnly[caching
 enabled = true] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 185 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testLeftWithLeftVersionedOnly[caching
 enabled = true] STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 185 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testLeftWithLeftVersionedOnly[caching
 enabled = true] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 185 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerWithLeftVersionedOnly[caching
 enabled = true] STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 185 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerWithLeftVersionedOnly[caching
 enabled = true] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 185 > 
TableTableJoinIntegrationTest > [caching enabled = false] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testLeftInner[caching
 enabled = false] STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 185 > 
TableTableJoinIntegrationTest > [caching enabled = false] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.test

Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2342

2023-10-30 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.4 #171

2023-10-30 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 527827 lines...]
Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testCreateTopLevelPaths() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testIsrChangeNotificationGetters() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testIsrChangeNotificationGetters() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testLogDirEventNotificationsDeletion() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testLogDirEventNotificationsDeletion() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testGetLogConfigs() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testGetLogConfigs() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testBrokerSequenceIdMethods() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testBrokerSequenceIdMethods() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testAclMethods() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testAclMethods() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testCreateSequentialPersistentPath() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testCreateSequentialPersistentPath() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testConditionalUpdatePath() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testConditionalUpdatePath() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testDeleteTopicZNode() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testDeleteTopicZNode() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testDeletePath() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testDeletePath() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testGetBrokerMethods() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testGetBrokerMethods() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testJuteMaxBufffer() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testJuteMaxBufffer() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testCreateTokenChangeNotification() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testCreateTokenChangeNotification() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testGetTopicsAndPartitions() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testGetTopicsAndPartitions() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testChroot(boolean) > 
kafka.zk.KafkaZkClientTest.testChroot(boolean)[1] STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testChroot(boolean) > 
kafka.zk.KafkaZkClientTest.testChroot(boolean)[1] PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testChroot(boolean) > 
kafka.zk.KafkaZkClientTest.testChroot(boolean)[2] STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testChroot(boolean) > 
kafka.zk.KafkaZkClientTest.testChroot(boolean)[2] PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testRegisterBrokerInfo() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 168 > 
KafkaZkClientTest > testRegisterBrokerInfo() PASSED

Grad

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2343

2023-10-30 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 436513 lines...]
Progress (1): 2.3/5.6 MB
[...repeated Maven download progress lines truncated...]
Progress (1): 5.4/5.6 MB

ACCESS to Apache Pony Mail

2023-10-30 Thread Arpit Goyal
Hi
Can anyone help me get access to Apache Pony Mail? I tried logging in using
my Jira credentials, but it didn't work.
Thanks and Regards
Arpit Goyal
8861094754