Re: [VOTE] KIP-877: Mechanism for plugins and connectors to register metrics

2024-06-10 Thread Mickael Maison
Hi,

Following the feedback in the DISCUSS thread, I made significant
changes to the proposal. So I'd like to restart a vote for KIP-877:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-877%3A+Mechanism+for+plugins+and+connectors+to+register+metrics

Thanks,
Mickael

On Thu, Jan 25, 2024 at 2:59 AM Tom Bentley  wrote:
>
> Hi Mickael,
>
> You'll have seen that I left some comments on the discussion thread, but
> they're minor enough that I'm happy to vote +1 here.
>
> Thanks,
>
> Tom
>
> On Thu, 11 Jan 2024 at 06:14, Mickael Maison 
> wrote:
>
> > Bumping this thread since I've not seen any feedback.
> >
> > Thanks,
> > Mickael
> >
> > On Tue, Dec 19, 2023 at 10:03 AM Mickael Maison
> >  wrote:
> > >
> > > Hi,
> > >
> > > I'd like to start a vote on KIP-877:
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-877%3A+Mechanism+for+plugins+and+connectors+to+register+metrics
> > >
> > > Let me know if you have any feedback.
> > >
> > > Thanks,
> > > Mickael
> >
> >


[jira] [Created] (KAFKA-16925) stream-table join does not immediately forward expired records on restart

2024-06-10 Thread Ayoub Omari (Jira)
Ayoub Omari created KAFKA-16925:
---

 Summary: stream-table join does not immediately forward expired 
records on restart
 Key: KAFKA-16925
 URL: https://issues.apache.org/jira/browse/KAFKA-16925
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Ayoub Omari
Assignee: Ayoub Omari


[KIP-923|https://cwiki.apache.org/confluence/display/KAFKA/KIP-923%3A+Add+A+Grace+Period+to+Stream+Table+Join]
 introduced a grace period for KStreamKTableJoin. This allows joining a stream 
with a KTable backed by a versioned state store.

When a record is received, it is put in a buffer until the grace period has elapsed. 
Once the grace period elapses, the record is joined with its most recent match 
from the versioned state store.

+Late records+ are +not+ put in the buffer and are immediately joined.

 
{code:java}
If the grace period is non zero, the record will enter a stream buffer and will 
dequeue when the record timestamp is less than or equal to stream time minus 
the grace period.  Late records, out of the grace period, will be executed 
right as they come in. (KIP-923){code}
 

However, this is not the case today on rebalance or restart. The reason is that 
observedStreamTime is maintained in a variable that is lost on 
rebalance/restart: 
[https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamKTableJoinProcessor.java#L54]

 

If the task restarts and receives an expired record, it treats that record's 
timestamp as the maximum stream time observed so far, and puts the record in the 
buffer instead of joining it immediately.

 

{*}Example{*}:
 * Grace period = 60s
 * KTable contains (key, rightValue)

+Normal scenario+
{code:java}
streamInput1 (key, value1) <--- time = T : put in buffer

streamInput2 (key, value2) <--- time = T - 60s : immediately joined // 
streamTime = T{code}
+Scenario with rebalance+

 
{code:java}
streamInput1 (key, value1) <--- time = T : put in buffer

// --- rebalance ---

streamInput2 (key, value2) <--- time = T - 60s : put in buffer // streamTime = 
T - 60s{code}
 

The processor should instead use currentStreamTime from the Context, which is 
recovered on restart.
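
A minimal sketch of that direction, assuming the processor reads stream time from 
the context rather than a transient field (only currentStreamTimeMs() is an 
existing API here; the class and remaining names are illustrative, not the actual 
patch):
{code:java}
import java.time.Duration;
import org.apache.kafka.streams.processor.api.ContextualProcessor;
import org.apache.kafka.streams.processor.api.Record;

public class GraceBufferingProcessorSketch<K, V> extends ContextualProcessor<K, V, K, V> {
    private final long gracePeriodMs = Duration.ofSeconds(60).toMillis();

    @Override
    public void process(final Record<K, V> record) {
        // currentStreamTimeMs() is recovered on restart, unlike a local
        // observedStreamTime variable, so expired records stay "late" after a restart.
        final long streamTime = context().currentStreamTimeMs();
        if (record.timestamp() <= streamTime - gracePeriodMs) {
            context().forward(record); // expired/late record: join and forward immediately
        } // otherwise the record would be held in the time-ordered buffer
    }
}
{code}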

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-877: Mechanism for plugins and connectors to register metrics

2024-06-10 Thread Chris Egerton
+1 (binding), thanks Mickael!

On Mon, Jun 10, 2024, 04:24 Mickael Maison  wrote:

> Hi,
>
> Following the feedback in the DISCUSS thread, I made significant
> changes to the proposal. So I'd like to restart a vote for KIP-877:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-877%3A+Mechanism+for+plugins+and+connectors+to+register+metrics
>
> Thanks,
> Mickael
>
> On Thu, Jan 25, 2024 at 2:59 AM Tom Bentley  wrote:
> >
> > Hi Mickael,
> >
> > You'll have seen that I left some comments on the discussion thread, but
> > they're minor enough that I'm happy to vote +1 here.
> >
> > Thanks,
> >
> > Tom
> >
> > On Thu, 11 Jan 2024 at 06:14, Mickael Maison 
> > wrote:
> >
> > > Bumping this thread since I've not seen any feedback.
> > >
> > > Thanks,
> > > Mickael
> > >
> > > On Tue, Dec 19, 2023 at 10:03 AM Mickael Maison
> > >  wrote:
> > > >
> > > > Hi,
> > > >
> > > > I'd like to start a vote on KIP-877:
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-877%3A+Mechanism+for+plugins+and+connectors+to+register+metrics
> > > >
> > > > Let me know if you have any feedback.
> > > >
> > > > Thanks,
> > > > Mickael
> > >
> > >
>


[jira] [Resolved] (KAFKA-14509) Add ConsumerGroupDescribe API

2024-06-10 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-14509.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Add ConsumerGroupDescribe API
> -
>
> Key: KAFKA-14509
> URL: https://issues.apache.org/jira/browse/KAFKA-14509
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: Max Riedel
>Priority: Major
>  Labels: kip-848-preview
> Fix For: 3.8.0
>
>
> The goal of this task is to implement the ConsumerGroupDescribe API as 
> described 
> [here|https://cwiki.apache.org/confluence/display/KAFKA/KIP-848%3A+The+Next+Generation+of+the+Consumer+Rebalance+Protocol#KIP848:TheNextGenerationoftheConsumerRebalanceProtocol-ConsumerGroupDescribeAPI];
>  and to implement the related changes in the admin client as described 
> [here|https://cwiki.apache.org/confluence/display/KAFKA/KIP-848%3A+The+Next+Generation+of+the+Consumer+Rebalance+Protocol#KIP848:TheNextGenerationoftheConsumerRebalanceProtocol-Admin#describeConsumerGroups].
> On the server side, this mainly requires the following steps:
>  # The request/response schemas must be defined (see 
> ListGroupsRequest/Response.json for an example);
>  # Request/response classes must be defined (see 
> ListGroupsRequest/Response.java for an example);
>  # The API must be defined in KafkaApis (see 
> KafkaApis#handleDescribeGroupsRequest for an example);
>  # The GroupCoordinator interface (java file) must be extended for the new 
> operations.
>  # The new operation must be implemented in GroupCoordinatorService (new 
> coordinator in Java) whereas the GroupCoordinatorAdapter (old coordinator in 
> Scala) should just reject the request.
> We could probably do 1) and 2) in one pull request and the remaining ones in 
> another.
> On the admin client side, this mainly requires the following steps (see the 
> usage sketch after this list):
>  * Define all the new java classes as defined in the KIP.
>  * Add the new API to KafkaAdminClient class.
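
For reference, the admin-client entry point this work extends is the existing 
Admin#describeConsumerGroups call; a minimal usage sketch (the bootstrap address 
and group id below are placeholders):
{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;

public class DescribeGroupExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Describe one group and wait for the result
            ConsumerGroupDescription description = admin
                .describeConsumerGroups(Collections.singletonList("my-group"))
                .describedGroups()
                .get("my-group")
                .get();
            System.out.println(description);
        }
    }
}
{code}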



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16821) Create a new interface to store member metadata

2024-06-10 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16821.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Create a new interface to store member metadata
> ---
>
> Key: KAFKA-16821
> URL: https://issues.apache.org/jira/browse/KAFKA-16821
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ritika Reddy
>Assignee: Ritika Reddy
>Priority: Major
> Fix For: 3.8.0
>
> Attachments: Screenshot 2024-05-14 at 11.03.10 AM.png
>
>
> !Screenshot 2024-05-14 at 11.03.10 AM.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2992

2024-06-10 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-1017: A health check endpoint for Kafka Connect

2024-06-10 Thread Chris Egerton
Hi all,

Thanks for the positive feedback!

I've made one small addition to the KIP since there's been a change to our
REST timeout error messages that's worth incorporating here. Quoting the
added section directly:

> Note that the HTTP status codes and "status" fields in the JSON response
will match the exact examples above. However, the "message" field may be
augmented to include, among other things, more information about the
operation(s) the worker could be blocked on (such as was added in REST
timeout error messages in KAFKA-15563 [1]).
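
For anyone who has not opened the KIP, the shape being discussed is roughly as
follows; the path, status value, and message text here are only illustrative, and
the examples in the KIP itself are authoritative:

$ curl -i http://localhost:8083/health
HTTP/1.1 200 OK
{
  "status": "healthy",
  "message": "Worker has completed startup and is ready to handle requests."
}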

Assuming this still looks okay to everyone, I'll kick off a vote thread
sometime this week or the next.

[1] - https://issues.apache.org/jira/browse/KAFKA-15563

Cheers,

Chris

On Fri, Jun 7, 2024 at 12:01 PM Andrew Schofield 
wrote:

> Hi Chris,
> This KIP looks good to me. I particularly like the explanation of how the
> result will specifically
> check the worker health in ways that have previously caused trouble.
>
> Thanks,
> Andrew
>
> > On 7 Jun 2024, at 16:18, Mickael Maison 
> wrote:
> >
> > Hi Chris,
> >
> > Happy Friday! The KIP looks good to me. +1
> >
> > Thanks,
> > Mickael
> >
> > On Fri, Jan 26, 2024 at 8:41 PM Chris Egerton 
> wrote:
> >>
> >> Hi all,
> >>
> >> Happy Friday! I'd like to kick off discussion for KIP-1017, which (as
> the
> >> title suggests) proposes adding a health check endpoint for Kafka
> Connect:
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1017%3A+Health+check+endpoint+for+Kafka+Connect
> >>
> >> This is one of the longest-standing issues with Kafka Connect and I'm
> >> hoping we can finally put it in the ground soon. Looking forward to
> hearing
> >> people's thoughts!
> >>
> >> Cheers,
> >>
> >> Chris
>
>


[jira] [Created] (KAFKA-16926) Optimize BeginQuorumEpoch heartbeat

2024-06-10 Thread Alyssa Huang (Jira)
Alyssa Huang created KAFKA-16926:


 Summary: Optimize BeginQuorumEpoch heartbeat
 Key: KAFKA-16926
 URL: https://issues.apache.org/jira/browse/KAFKA-16926
 Project: Kafka
  Issue Type: Sub-task
Reporter: Alyssa Huang


Instead of sending BeginQuorumEpoch requests to every voter on a fixed cadence, we 
can save some requests by only sending to voters that have not fetched within 
the fetch timeout.
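
A self-contained sketch of the selection logic being described (the bookkeeping 
below is invented for illustration and is not the actual KRaft code):
{code:java}
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class BeginQuorumEpochSchedulerSketch {
    /** Returns the voter ids that have not fetched within the fetch timeout. */
    static List<Integer> votersNeedingHeartbeat(Map<Integer, Long> lastFetchTimeMs,
                                                long nowMs,
                                                long fetchTimeoutMs) {
        return lastFetchTimeMs.entrySet().stream()
            .filter(entry -> nowMs - entry.getValue() >= fetchTimeoutMs)
            .map(Map.Entry::getKey)
            .collect(Collectors.toList());
    }
}
{code}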



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16927) Adding tests for restarting followers receiving leader endpoint correctly

2024-06-10 Thread Alyssa Huang (Jira)
Alyssa Huang created KAFKA-16927:


 Summary: Adding tests for restarting followers receiving leader 
endpoint correctly
 Key: KAFKA-16927
 URL: https://issues.apache.org/jira/browse/KAFKA-16927
 Project: Kafka
  Issue Type: Sub-task
  Components: kraft
Reporter: Alyssa Huang


We'll need to test that restarting followers are populated with the correct leader 
endpoint after receiving a BeginQuorumEpochRequest. This depends on KAFKA-16536 and 
on the voter RPC changes being completed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2993

2024-06-10 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-16928) Test all of the request and response methods in RaftUtil

2024-06-10 Thread Jira
José Armando García Sancio created KAFKA-16928:
--

 Summary: Test all of the request and response methods in RaftUtil
 Key: KAFKA-16928
 URL: https://issues.apache.org/jira/browse/KAFKA-16928
 Project: Kafka
  Issue Type: Sub-task
Reporter: José Armando García Sancio


Add a RaftUtilTest test suite that checks that the requests and responses 
constructed by RaftUtil can be serialized to all of the supported KRaft RPC 
versions.

The RPCs are:
 # Fetch
 # FetchSnapshot
 # Vote
 # BeginQuorumEpoch
 # EndQuorumEpoch

At the moment some of the RPCs are missing; this should be worked on after RaftUtil 
implements the creation of all KRaft RPCs.
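
A rough sketch of the kind of per-version serialization check such a test suite 
could contain, using the Vote RPC as an example (illustrative only, not the 
planned test code):
{code:java}
import org.apache.kafka.common.message.VoteRequestData;
import org.apache.kafka.common.protocol.ApiKeys;
import org.apache.kafka.common.protocol.MessageUtil;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertNotNull;

public class VoteRequestSerializationSketchTest {
    @Test
    public void testSerializeToAllSupportedVersions() {
        VoteRequestData request = new VoteRequestData().setClusterId("cluster-id");
        for (short version = ApiKeys.VOTE.oldestVersion();
             version <= ApiKeys.VOTE.latestVersion();
             version++) {
            // toByteBuffer throws if the message cannot be written at this version
            assertNotNull(MessageUtil.toByteBuffer(request, version));
        }
    }
}
{code}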



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-9228) Reconfigured converters and clients may not be propagated to connector tasks

2024-06-10 Thread Chris Egerton (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Egerton resolved KAFKA-9228.
--
Fix Version/s: 3.9.0
   Resolution: Fixed

> Reconfigured converters and clients may not be propagated to connector tasks
> 
>
> Key: KAFKA-9228
> URL: https://issues.apache.org/jira/browse/KAFKA-9228
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Affects Versions: 2.3.0, 2.4.0, 2.3.1, 2.3.2
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 3.9.0
>
>
> If an existing connector is reconfigured but the only changes are to its 
> converters and/or Kafka clients (enabled as of 
> [KIP-458|https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy]),
>  the changes will not propagate to its tasks unless the connector also 
> generates task configs that differ from the existing task configs. Even after 
> this point, if the connector tasks are reconfigured, they will still not pick 
> up on the new converter and/or Kafka client configs.
> This is because the {{DistributedHerder}} only writes new task configurations 
> to the connect config topic [if the connector-provided task configs differ 
> from the task configs already in the config 
> topic|https://github.com/apache/kafka/blob/e499c960e4f9cfc462f1a05a110d79ffa1c5b322/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L1285-L1332],
>  and neither of those contain converter or Kafka client configs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14567) Kafka Streams crashes after ProducerFencedException

2024-06-10 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-14567.
-
  Assignee: (was: Kirk True)
Resolution: Fixed

Not 100% sure either, but I feel good enough to close this ticket for now. If 
we see it again, we can reopen or create a new ticket.

> Kafka Streams crashes after ProducerFencedException
> ---
>
> Key: KAFKA-14567
> URL: https://issues.apache.org/jira/browse/KAFKA-14567
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.7.0
>Reporter: Matthias J. Sax
>Priority: Blocker
>  Labels: eos
> Fix For: 3.8.0
>
>
> Running a Kafka Streams application with EOS-v2.
> We first see a `ProducerFencedException`. After the fencing, the fenced 
> thread crashed resulting in a non-recoverable error:
> {quote}[2022-12-22 13:49:13,423] ERROR [i-0c291188ec2ae17a0-StreamThread-3] 
> stream-thread [i-0c291188ec2ae17a0-StreamThread-3] Failed to process stream 
> task 1_2 due to the following error: 
> (org.apache.kafka.streams.processor.internals.TaskExecutor)
> org.apache.kafka.streams.errors.StreamsException: Exception caught in 
> process. taskId=1_2, processor=KSTREAM-SOURCE-05, 
> topic=node-name-repartition, partition=2, offset=539776276, 
> stacktrace=java.lang.IllegalStateException: TransactionalId 
> stream-soak-test-72b6e57c-c2f5-489d-ab9f-fdbb215d2c86-3: Invalid transition 
> attempted from state FATAL_ERROR to state ABORTABLE_ERROR
> at 
> org.apache.kafka.clients.producer.internals.TransactionManager.transitionTo(TransactionManager.java:974)
> at 
> org.apache.kafka.clients.producer.internals.TransactionManager.transitionToAbortableError(TransactionManager.java:394)
> at 
> org.apache.kafka.clients.producer.internals.TransactionManager.maybeTransitionToErrorState(TransactionManager.java:620)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:1079)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:959)
> at 
> org.apache.kafka.streams.processor.internals.StreamsProducer.send(StreamsProducer.java:257)
> at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:207)
> at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:162)
> at 
> org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:85)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:290)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:269)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:228)
> at 
> org.apache.kafka.streams.kstream.internals.KStreamKTableJoinProcessor.process(KStreamKTableJoinProcessor.java:88)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:157)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:290)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:269)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:228)
> at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:84)
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.lambda$doProcess$1(StreamTask.java:791)
> at 
> org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:867)
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.doProcess(StreamTask.java:791)
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:722)
> at 
> org.apache.kafka.streams.processor.internals.TaskExecutor.processTask(TaskExecutor.java:95)
> at 
> org.apache.kafka.streams.processor.internals.TaskExecutor.process(TaskExecutor.java:76)
> at 
> org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:1645)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:788)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:607)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:569)
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:748)
> at 
> o

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.8 #29

2024-06-10 Thread Apache Jenkins Server
See 




[VOTE] 3.7.1 RC1

2024-06-10 Thread Igor Soarez
Hello Kafka users, developers and client-developers,

This is the first candidate for release of Apache Kafka 3.7.1.

This is a bugfix release with several fixes.

Release notes for the 3.7.1 release:
https://home.apache.org/~soarez/kafka-3.7.1-rc1/RELEASE_NOTES.html

*** Please download, test and vote by Tuesday June 18th, 11am UTC.

Kafka's KEYS file containing PGP keys we use to sign the release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
https://home.apache.org/~soarez/kafka-3.7.1-rc1/

* Docker release artifact to be voted upon:
apache/kafka:3.7.1-rc1

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~soarez/kafka-3.7.1-rc1/javadoc/

* Tag to be voted upon (off 3.7 branch) is the 3.7.1 tag:
https://github.com/apache/kafka/releases/tag/3.7.1-rc1

* Documentation:
https://kafka.apache.org/37/documentation.html

* Protocol:
https://kafka.apache.org/37/protocol.html

* Successful Jenkins builds for the 3.7 branch:
Unit/integration tests: 
https://ci-builds.apache.org/job/Kafka/job/kafka/job/3.7/175/
The run shows some flaky tests, but I have confirmed they all pass locally.
One of them, 
kafka.api.PlaintextConsumerTest.testPerPartitionLagMetricsCleanUpWithSubscribe,
has an issue with the sequential run of the different test parameters,
but when run independently, all the variants pass.

System tests: I don't have access to the Confluent URL for system tests:
 https://jenkins.confluent.io/job/system-test-kafka/job/3.7
I am still working on finishing a full run using resources available to me.
If anyone with access is able to check them, please reply with details.

* Successful Docker Image Github Actions Pipeline for 3.7 branch:
Docker Build Test Pipeline: 
https://github.com/apache/kafka/actions/runs/9455339546


Thanks,

--
Igor Soarez



Re: [DISCUSS] KIP-1035: StateStore managed changelog offsets

2024-06-10 Thread Matthias J. Sax

Thanks Nick.


201: This makes sense. Would it make sense to actually document the hard 
requirement?


202: SG.

204: Ah yes. Thanks for clarifying.

205: SG.


I think you can start a VOTE.


-Matthias



On 6/6/24 2:53 AM, Nick Telford wrote:

Hi Matthias,

Thanks for your thorough review.

200. (#managesOffsets requirements)
Done

201. (#commit atomicity recommendation vs. guarantee)
There are possible StateStore implementations, including existing ones,
that can't guarantee atomicity - because the underlying database/system
doesn't support it. It might be hypothetically possible for atomicity to be
layered on top of these systems, but doing so would likely be complex and
potentially introduce performance issues. For this reason, I chose to make
this a recommendation over a requirement. The only hard requirement is that
writes are persisted to disk *before* offsets, so there's no data loss.
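
A minimal sketch of that ordering contract (not the KIP's reference
implementation; the storage calls below are stand-ins):

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.TopicPartition;

public class OffsetManagingStoreSketch {
    // stand-in for the real storage engine's offset keyspace
    private final Map<TopicPartition, Long> committedOffsets = new HashMap<>();

    /** Mirrors the KIP's StateStore#commit(Map): flush data first, then offsets. */
    public void commit(final Map<TopicPartition, Long> changelogOffsets) {
        flushDataToDisk();                           // 1. persist the records themselves
        committedOffsets.putAll(changelogOffsets);
        flushOffsetsToDisk();                        // 2. only then persist the offsets
    }

    private void flushDataToDisk() { /* engine-specific */ }
    private void flushOffsetsToDisk() { /* engine-specific */ }
}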

202. (Metrics descriptions)
I've actually updated the metric descriptions to match those of the
existing "flush" metrics. Is that ok?

203a. (Consumer Rebalance Metadata)
This is definitely implementation detail, so I've removed the "and close()
these stores" from the KIP, as well as the last paragraph about performance
testing and using separate threads, as they are also implementation details.
I'm actually working on this logic right now. Currently I have it just
initializing a StandbyTask for each found directory and then closing them
after we get the offsets. The reason we need to initialize an entire Task
is because StateStores are constructed by the ProcessorTopology when it's
initialized.
I'm going to spend some time today, looking into whether we can keep these
StandbyTasks open but not RUNNING, and then on-assignment either start
running them (if assigned a Standby) or upgrade it to a StreamTask (if
assigned an Active). I think it *should* be possible, but the devil is
usually in the details!

203b. (managesOffsets deprecation)
Done

204. (Downgrade)
In my testing, when an older version of RocksDBStore attempts to open a
database with an unknown column family (i.e the offsets cf), it throws an
Exception, which is caught and rethrown as TaskCorruptedException; this
triggers a wipe of local Task state and crashes the StreamThread. On
restart, it restores as normal.

205. (Segment stores implementation)
I'm deliberately not detailing this implementation in the KIP, because all
SegmentStore APIs are internal, so this is really just an implementation
detail.
What I'm (currently) doing is storing offsets in each Segment. Obviously
the currently live segment will be the only one with offsets being
advanced, so we always return offsets from the currently live segment, but
if there isn't one, then we go backwards through the existing offsets (i.e.
starting with the most recent) and return the first available offset.
I confess that SegmentStores are not an area I know much about, but I
believe this should work.

206. (KeyValueStoreTestDriver)
Hmm, good point. I thought this was part of the test-utils package along
with TopologyTestDriver. I'll keep it out of the KIP.

General status update:
I've begun implementing this KIP in a branch separate from my earlier work.
One of my primary goals is to implement it incrementally, in a way that
allows each commit to be independently reviewed and merged to trunk without
breaking anything. That should keep reviews much more concise. I'll start
opening PRs once the KIP has been accepted *and* I'm close enough to
completion that we can guarantee getting it all done by the next release.
--
Cheers,
Nick

On Tue, 4 Jun 2024 at 20:34, Matthias J. Sax  wrote:


Nick,

Thanks a lot for updating the KIP. I made a pass over it. Overall LGTM.
A few nits and some more minor questions:



200: nit (Javadocs for `StateStore.managesOffsets()`):


This is highly
recommended, if possible, to ensure that custom StateStores provide the
consistency guarantees that Kafka Streams expects when operating under the
{@code exactly-once} {@code processing.mode}.

Given that we make it mandatory, we should rephrase this: "highly 
recommended" does not seem to be strong enough wording.



201: Javadocs for `StateStore.commit(final Map
changelogOffsets)`:


Implementations SHOULD ensure that {@code changelogOffsets} are
committed to disk atomically with the records they represent, if possible.


Not sure if I can follow? Why "should ensure", but not "must ensure"?



202: New metrics:

`commit-rate` -> Description says "The number of calls to..." -- Should
be "The number of calls per second to..."?

`commit-latency-[]` -> Description says "The [] time taken to" -- Should
be "The [] time in nanoseconds taken to..."? (or milliseconds in case we
report in millis?)



203: Section "Consumer Rebalance Metadata"


We will then cache these offsets in-memory and close() these stores.


I think we should not pro-actively close the store, but keep them open,
until we get tasks assigned. For assigned tasks, we don

Re: [DISCUSS] KIP-1056: Remove `default.` prefix for exception handler StreamsConfig

2024-06-10 Thread Matthias J. Sax

Thanks for the KIP Murali,

Overall LGTM. A few comments.



100: Config names are part of the public interface, and the KIP should 
not say "none" in this section, but call out which configs are 
deprecated and which ones are newly added.



101: Nit. In "Propose Changes" there is the template placeholder text


Describe the new thing you want to do in appropriate detail. This may be fairly 
extensive and have large subsections of its own. Or it may be a few sentences. 
Use judgement based on the scope of the change.


Similarly in "Test Plan" section

Please remove both :)


102: The "Deprecation" section should explain the behavior if both, old 
and new configs, are set.



Thanks a lot!


-Matthias


On 6/9/24 9:30 PM, Muralidhar Basani wrote:

Hello all,

With this KIP, I would like to mention that a couple of exception handler configs in
StreamsConfig are defined as default configs, despite having no alternative
values. Hence I would propose to deprecate them and introduce new configs
without the 'default.' prefix.
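
To make the rename concrete, this is roughly the change users would eventually make
when building their Streams configuration (the un-prefixed name is passed as a plain
string here because no constant exists for it yet, and the final name is whatever
the KIP settles on):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;

public class HandlerConfigExample {
    public static Properties props() {
        Properties props = new Properties();
        // current, to-be-deprecated name
        props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
                  LogAndContinueExceptionHandler.class);
        // proposed replacement without the 'default.' prefix
        props.put("deserialization.exception.handler", LogAndContinueExceptionHandler.class);
        return props;
    }
}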

This KIP is briefly discussed here in the jira KAFKA-16853
 too.

I would appreciate any feedback or suggestions you might have.

Thanks,
Murali



Re: [DISCUSS] KIP-1050: Consistent error handling for Transactions

2024-06-10 Thread Matthias J. Sax
Thanks for this KIP. Great to see it. I would assume it will make 
KIP-691 unnecessary?


I don't think I fully understand the proposal yet. It's clear that you 
propose to add new sub-classes to group existing exceptions. But it's 
not clear to me which of the existing exceptions (which implement 
ApiException directly right now) will get a new parent class and go into 
the same group. You only list `InvalidProducerEpochException`, which gets 
`AbortableTransactionException` as its new parent. It would help a lot if 
you could list out explicitly which existing exceptions are grouped 
together via which sub-class.


It should be sufficient to just add a list for each group. For the newly 
added exception classes, I would also omit all constructors etc and just 
add a comment about it -- having constructors listed out does not add 
much value to the KIP itself but makes it harder to read (it's 
effectively noise we can avoid IMHO).




I am also wondering about compatibility? If I read the section 
correctly, you actually propose to introduce a non-backward-compatible 
change?



Based on type of exception thrown, user needs to change their exception 
catching logic to take actions against their exception handling.


Ie, an application cannot be upgraded w/o code changes? I am not sure if 
this is acceptable?


I think it would be much better (not sure if feasible) to keep the old 
behavior and let users opt-in / enable the new semantics via a config. 
If the new behavior is disabled, we could log a WARN that the app should 
upgrade to work with the new semantics, and we would only enforce the 
new behavior in a later major release.


Thoughts?



-Matthias






On 6/7/24 4:06 AM, Kaushik Raina wrote:

Thank you Andrew for feedback

1. We are suggesting to only update subclasses of
o.a.k.common.errors.ApiException, which are used in transactions. All such
subclasses are mentioned in Exception table


2. "Producer-Recoverable" corresponds to the AbortableException. I have
updated comments on each exception type.

3. Yes, it's correct that by adding a "Retriable" exception, it simplifies
the determination of which errors can be retried internally. In the Exception
table

mentioned
in the "Proposed Changes" section, the "Expected Handling" column signifies
the handling for each error type. Please let me know if any further
clarification is needed.

4a. Yes, that is correct. For clarity, only one constructor has been
mentioned in the KIP. An ellipsis has been added as a placeholder,
indicating that there are additional functions in the class but they are
not explicitly specified.
4b. Updated in the KIP.

5. TopicAuthorizationException extends "Invalid Configuration". "Invalid
Configuration" type can be resolved either by dynamically updating the
configuration, which does not require a restart, or by statically updating
it by restarting the application. It is at the application's discretion how
they want to handle each "Invalid Configuration" type.

I have added a client-side handling example in the KIP. Hope that helps.
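
For illustration only, the control flow such client-side handling boils down to
might look like the following (AbortableTransactionException is the grouping
proposed in the KIP and does not exist in released clients, so a local placeholder
is declared to keep the sketch self-contained):

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TransactionalSendSketch {
    // placeholder for the exception group proposed in KIP-1050
    static class AbortableTransactionException extends RuntimeException { }

    static void sendInTransaction(KafkaProducer<String, String> producer,
                                  ProducerRecord<String, String> record) {
        producer.beginTransaction();
        try {
            producer.send(record);
            producer.commitTransaction();
        } catch (AbortableTransactionException e) {
            producer.abortTransaction();   // "abortable" group: abort and retry the batch
        } catch (RuntimeException fatal) {
            producer.close();              // everything else is treated as fatal in this sketch
            throw fatal;
        }
    }
}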



Re: [DISCUSS] KIP-655: Add deduplication processor in kafka-streams

2024-06-10 Thread Matthias J. Sax

Thanks for the update Ayoub.


101: you say:


But I am not sure if we don't want to have them for this processor ?


What is your reasoning to move off the established pattern? Would be good 
to understand why the `Deduplicated` class needs a different 
"structure" compared to existing classes.




102: Creating iterators is very expensive. For other work, we actually 
hit 100x (?) throughput degradation by creating an (for most cases 
empty) iterator for every input record, and needed to find other ways to 
avoid creating an iterator per record. It really kills performance.


I see the point about data expiration. We could experiment with 
punctuation to expire old data, or add a second "time-ordered store" 
(which we already have at hand) which acts as an index into the main 
store. -- Another possibility would be to add a new version of segmented 
store with a different key-layout (ie, just store the plain key). I 
think with some refactoring, we might be able to re-use a lot of 
existing code.




104:


This gets me wondering if this is a limitation of stateful processors
in ALOS. For example a windowed aggregation with `on_window_close`
emit strategy may have the same limitation today (we receive a record,
we update its aggregation result in the store, then crash before committing,
then the record will be again reconsidered for aggregation). Is this
correct ?


Yes, this is correct, but it does not violate ALOS, because we did not 
lose the input record -- of course, the aggregation would contain the 
input record twice (eg, over count), but this is ok under ALOS. 
Unfortunately, for de-duplication this pattern breaks, because 
de-duplication operator does a different "aggregation logic" depending 
on its state (emit if no key found, but not emit if key found). For 
counting as an example, we increment the count and emit unconditionally 
though.




As a workaround, I think storing the record's offset inside the
store's value can tell us whether the record has been already seen or not.
If we receive a record whose deduplication id exists in the store
and the entry in the store has the same offset, it means the record
is processed twice and we can go ahead and forward it. If the offset
is different, it means it's a duplicate record, so we ignore it.


Great idea. This might work... If we store the input record offset, we 
can actually avoid that the "aggregation logic" changes for the same 
input record. -- And yes, with ALOS potentially emitting a duplicate is 
the-name-of-the-game, so no concerns on this part from my side.
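
A small self-contained sketch of that offset-based check (all names are invented 
for illustration and are not the KIP's API):

import java.util.HashMap;
import java.util.Map;

public class DedupOffsetCheckSketch {
    private final Map<String, Long> seenOffsets = new HashMap<>(); // stand-in for the state store

    /** Returns true if the record should be forwarded downstream. */
    public boolean shouldForward(final String dedupId, final long recordOffset) {
        final Long storedOffset = seenOffsets.get(dedupId);
        if (storedOffset == null) {
            seenOffsets.put(dedupId, recordOffset); // first time this id is seen
            return true;
        }
        // The same offset means the same input record is being reprocessed after a
        // failure under ALOS, so forward it again; a different offset is a true duplicate.
        return storedOffset == recordOffset;
    }
}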




105: picking the first offset with the smallest ts sounds good to me. The KIP 
should be explicit about it. But as discussed above, it might be 
simplest to not really have a window lookup, but just a plain key-lookup 
and drop if the key exists in the store? -- For late records, it might 
imply that they are not de-duplicated, but this is also the case for 
in-order records if they are further apart than the de-duplication 
window size, right? Thus I would believe this is "more natural" compared 
to discarding late records pro-actively, which would lead to missing 
result records?


We could also make this configurable if we are not sure what users 
really need -- or add such a configuration later in case the semantics 
we pick don't work for some users.


Another line of thinking, that did serve us well in the past: when in doubt, 
keep a record -- users can add operators to drop records (in case they 
don't want to keep them), but if we drop a record, users have no way to 
resurrect it (thus, building a workaround to change semantics is 
possible for users if we default to keeping records, but not the other way 
around).


Would be good to get input from the broader community on this question 
though. In the end, it must be a use-case driven decision?




-Matthias



On 6/6/24 5:02 AM, Ayoub Omari wrote:

Hi Matthias,

Thank you for your review !

100.
I agree. I changed the name of the parameter to "idSelector".
Because this id may be computed, it is better to call it "id" rather than
a field or attribute.

101.
The reason I added the methods `keySerde()` and `valueSerde()` was to
have the same capabilities as other serde classes (such as Grouped
or Joined). As a Kafka-streams user, I usually use `with(keySerde,
valueSerde)`
as you suggested. But I am not sure if we don't want to have them for this
processor ?

102.
  That's a good point ! Because we know that the window store will contain
at most one instance of a given key, I am not sure how the range fetch
on WindowStore compares to a KeyValueStore `get()` in this case.
Wondering if the fact that the record's key is the prefix of the underlying
keyValueStore's key ("") may provide comparable performance
to the random access of KeyValueStore ? Of course, the WindowStore fetch()
would be less efficient because it may fetch from more than 1 segment, and
because of some iterator overhead.

The purpose of using a WindowStore is to automatical

Re: [DISCUSS] KIP-1049: Add config log.summary.interval.ms to Kafka Streams

2024-06-10 Thread Matthias J. Sax

Thanks for updating the KIP. Overall LGTM.

However, "rejected alternatives" still says:


add a new configuration log.summary.interval.ms to control the output interval 
of summary log


This seems not to be rejected, but what the KIP actually proposes and 
should be removed?



I think you can start a VOTE for this KIP.



-Matthias


On 6/5/24 3:18 AM, jiang dou wrote:

@Sophie
I'm sorry I didn't see this email.
I want to add the configuration in StreamsConfig and define a default value. I have
updated the KIP.


Sophie Blee-Goldman  于2024年5月29日周三 15:56写道:


Sure, as I said I'm supportive of this KIP. Just wanted to mention how the
issue could be mitigated in the meantime since the description made it
sound like you were suffering from excessive logs right now. Apologies if I
misinterpreted that.

I do think it would be nice to have a general setting for log intervals in
Streams. There are some other places where a regular summary log might be
nice. The config name you proposed is generic enough that we could reuse it
for other areas where we'd like to log summaries, so this seems like a good
config to introduce

My only question/request is that the KIP doesn't mention where this config
is being added. I assume from the context and Motivation section that
you're proposing to add this to StreamsConfig, which makes sense to me. But
please update the KIP to say this somewhere.

Otherwise the KIP LGTM. Anyone else have thoughts on this?

On Thu, May 23, 2024 at 12:19 AM jiang dou  wrote:


Thank you for your reply,
I do not recommend setting the log level to WARN, because INFO-level logs
should be useful.


Sophie Blee-Goldman  于2024年5月23日周四 04:30写道:


Thanks for the KIP!

I'm not against adding this as a config for this per se, but if this is
causing you trouble right now you should be able to disable it via log4j
configuration so you don't need to wait for a fix in Kafka Streams itself.

Putting something like this in your log4j will shut off the offending log:






log4j.logger.org.apache.kafka.streams.processor.internals.StreamThread=WARN


On Wed, May 22, 2024 at 6:46 AM jiang dou 

wrote:



Hi


I would like to propose a change in the kafka-streams summary log.

Now the summary of the stream-thread is recorded every two minutes, and there is
no way to disable it or change the interval.

When Kafka is running, this is absolutely unnecessary and even harmful
since it fills the logs, and thus storage space, with unwanted and useless data.

I propose adding a configuration to control the output interval or disable it.

KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-1049%3A+Add+config+log.summary.interval.ms+to+Kafka+Streams












Re: [DISCUSS] KIP-1027 Add MockFixedKeyProcessorContext

2024-06-10 Thread Matthias J. Sax

Shashwat,

any updates on this KIP? -- I still think that recommending to use 
`InternalFixedKeyRecordFactory` is not the best way to write test code. 
Changing `FixedKeyRecord` constructor (as I mentioned in my last email) 
might not be a good solution either.


Maybe a cleaner way would be (to sidestep this problem) to add a new 
public "factory class" into the test package to generate 
FixedKeyRecords, and this factory could internally use 
`InternalFixedKeyRecordFactory`? It looks cleaner to me from an API POV, 
and if we change anything about how `FixedKeyRecord` can be created, it would 
become a non-user-facing / internal change to the "factory" we provide.
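
A hypothetical shape for such a factory (the class name, package, and delegation 
below are illustrative only, and assume the internal factory exposes a 
create(Record) entry point):

import org.apache.kafka.streams.processor.api.FixedKeyRecord;
import org.apache.kafka.streams.processor.api.InternalFixedKeyRecordFactory;
import org.apache.kafka.streams.processor.api.Record;

public final class FixedKeyRecordTestFactory {
    private FixedKeyRecordTestFactory() { }

    /** Builds a FixedKeyRecord for tests without exposing internal classes at call sites. */
    public static <K, V> FixedKeyRecord<K, V> create(final K key, final V value, final long timestamp) {
        return InternalFixedKeyRecordFactory.create(new Record<>(key, value, timestamp));
    }
}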



-Matthias

On 5/22/24 12:02 AM, Matthias J. Sax wrote:
I was not aware of `InternalFixedKeyRecordFactory`. As the name 
indicates, it's considered an internal class, so not sure if we should 
recommend to use it in test...


I understand why this class is required, and why it was put into a 
public package; the way Java works, enforces this. Not sure if we could 
find a better solution.


Might be good to hear from others.


-Matthias

On 5/21/24 3:57 PM, Shashwat Pandey wrote:
Looking at the ticket and the sample code, I think it would be 
possible to

continue using `InternalFixedKeyRecordFactory` as the avenue to create
`FixedKeyRecord`s in tests. As long as there was a
MockFixedKeyProcessorContext, I think we would be able to test
FixedKeyProcessors without a Topology.

I created a sample repo with the `MockFixedKeyProcessorContext` here is
what I think the tests would look like:
https://github.com/s7pandey/kafka-processor-tests/blob/main/src/test/java/com/example/demo/MyFixedKeyProcessorTest.java



On Mon, May 20, 2024 at 9:05 PM Matthias J. Sax  wrote:


Had a discussion on https://issues.apache.org/jira/browse/KAFKA-15242
and it was pointed out, that we also need to do something about
`FixedKeyRecord`. It does not have a public constructor (what is
correct; it should not have one). However, this makes testing
`FixedKeyProcessor` impossible w/o extending `FixedKeyRecord` manually
what does not seem to be right (too clumsy).

It seems, we either need some helper builder method (but not clear to me
where to add it in an elegant way) which would provide us with a
`FixedKeyRecord`, or add some sub-class to the test-utils module which
would extend `FixedKeyRecord`? -- Or maybe an even better solution? I
could not think of something else so far.


Thoughts?


On 5/3/24 9:46 AM, Matthias J. Sax wrote:

Please also update the KIP.

To get a wiki account created, please request it via a commet on this
ticket: https://issues.apache.org/jira/browse/INFRA-25451

After you have the account, please share your wiki id, and we can give
you write permission on the wiki.



-Matthias

On 5/3/24 6:30 AM, Shashwat Pandey wrote:

Hi Matthias,

Sorry this fell out of my radar for a bit.

Revisiting the topic, I think you’re right and we accept the 
duplicated
nesting as an appropriate solution to not affect the larger public 
API.


I can update my PR with the change.

Regards,
Shashwat Pandey


On Wed, May 1, 2024 at 11:00 PM Matthias J. Sax 

wrote:



Any updates on this KIP?

On 3/28/24 4:11 AM, Matthias J. Sax wrote:

It seems that `MockRecordMetadata` is a private class, and thus not
part
of the public API. If there are any changes required, we don't 
need to

discuss on the KIP.


For `CapturedPunctuator` and `CapturedForward` it's a little bit 
more

tricky. My gut feeling is, that the classes might not need to be
changed, but if we use them within `MockProcessorContext` and
`MockFixedKeyProcessorContext` it might be weird to keep the current
nesting... The problem I see is, that it's not straightforward 
how to

move the classes w/o breaking compatibility, nor if we duplicate
them as
standalone classes w/o a larger "splash radius". (We would need 
to add

new overloads for MockProcessorContext#scheduledPunctuators() and
MockProcessorContext#forwarded()).

Might be good to hear from others if we think it's worth this larger
changes to get rid of the nesting, or just accept the somewhat not
ideal
nesting as it technically is not a real issue?


-Matthias


On 3/15/24 1:47 AM, Shashwat Pandey wrote:

Thanks for the feedback Matthias!

The reason I proposed the extension of MockProcessorContext was 
more

to do
with the internals of the class (MockRecordMetadata,
CapturedPunctuator and
CapturedForward).

However, I do see your point, I would then think to split
MockProcessorContext and MockFixedKeyProcessorContext, some of the
internal
classes should also be extracted i.e. MockRecordMetadata,
CapturedPunctuator and probably a new CapturedFixedKeyForward.

Let me know what you think!


Regards,
Shashwat Pandey


On Mon, Mar 11, 2024 at 10:09 PM Matthias J. Sax 
wrote:


Thanks for the KIP Shashwat. Closing this testing gap is great! It
did
come up a few time already...

One question: why do you propose to `extend MockProcessorContext`?

Given how the actual 

[jira] [Created] (KAFKA-16929) Consider defining kafka-specified assertion to unify testing style

2024-06-10 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-16929:
--

 Summary: Consider defining kafka-specified assertion to unify 
testing style
 Key: KAFKA-16929
 URL: https://issues.apache.org/jira/browse/KAFKA-16929
 Project: Kafka
  Issue Type: New Feature
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


There are many contributors trying to fix the chaos of Kafka testing. That 
includes the following large efforts:
 # replace powermock/easymock by mockito (KAFKA-7438)
 # replace junit 4 assertion by junit 5 (KAFKA-7339)

It took 6 years to complete the migration for task 1. The second task is in 
progress and I hope it can be addressed in 4.0.0.

When reviewing, I noticed there are many different styles in the code base. That is 
why task 1 was so difficult to complete. Now, the rewriting of "assertion" 
is facing the same issue, and I feel the usage of "assertion" is even more 
awkward than "mockito" for the following reasons:
 # there are two "different" assertion styles in the code base - hamcrest and junit 
- which is confusing to developers 
([https://github.com/apache/kafka/pull/15730#discussion_r1567676845)]
 # third-party assertions do not offer good error messages, so we need to use 
a non-common style to get useful output 
([https://github.com/apache/kafka/pull/16253#discussion_r1633406693)]

IMHO, we should consider having our own Kafka-specific assertion style. That can 
bring the following benefits:
 # unify the assertion style of the whole project
 # apply customized assertions, for example:
 ## assertEqual(List, List, F)
 ## assertTrue(Supplier, Duration) - equivalent to `TestUtils.waitForCondition`
 # auto-generate useful error messages. For example: assertEqual(0, list) -> 
print the list

In short, I'd like to add a new module to define common assertions, and then 
apply it to the code base gradually.
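
A rough sketch of what one such assertion could look like (module, class, and 
method names below are illustrative; nothing is decided yet):
{code:java}
import java.time.Duration;
import java.util.function.Supplier;

public final class KafkaAssertions {
    private KafkaAssertions() { }

    /** Fails with a descriptive message if the condition does not become true within the timeout. */
    public static void assertEventuallyTrue(Supplier<Boolean> condition, Duration timeout)
            throws InterruptedException {
        final long deadlineMs = System.currentTimeMillis() + timeout.toMillis();
        while (System.currentTimeMillis() < deadlineMs) {
            if (Boolean.TRUE.equals(condition.get())) {
                return;
            }
            Thread.sleep(100);
        }
        throw new AssertionError("Condition not met within " + timeout);
    }
}
{code}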

All feedback/responses/objections are welcomed :)

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16924) No log output when running kafka

2024-06-10 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen resolved KAFKA-16924.
---
Resolution: Fixed

> No log output when running kafka 
> -
>
> Key: KAFKA-16924
> URL: https://issues.apache.org/jira/browse/KAFKA-16924
> Project: Kafka
>  Issue Type: Bug
>Reporter: Luke Chen
>Assignee: Luke Chen
>Priority: Major
> Fix For: 4.0.0
>
>
> In [https://github.com/apache/kafka/pull/12148], we removed the log4jAppender 
> dependency and added a testImplementation dependency for the `slf4jlog4j` lib. 
> However, we need this as a runtime dependency in the tools module to output logs 
> ([ref]([https://stackoverflow.com/a/21787813])). Adding this dependency back.
>  
> Note: The {{slf4jlog4j}} lib was pulled in via the {{log4j-appender}} dependency. 
> Since that has been removed, we need to declare it explicitly.
>  
> Current output will be like this:
> {code:java}
> > ./gradlew clean jar
> > bin/kafka-server-start.sh config/kraft/controller.properties
> SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
> SLF4J: Defaulting to no-operation (NOP) logger implementation
> SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
> details.{code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.7 #176

2024-06-10 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 461003 lines...]
[2024-06-11T01:18:32.255Z] 
[2024-06-11T01:18:32.255Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > 
testPartitionReassignmentInHybridMode(ClusterInstance) > 
testPartitionReassignmentInHybridMode [1] Type=ZK, MetadataVersion=3.7-IV0, 
Security=PLAINTEXT PASSED
[2024-06-11T01:18:32.255Z] 
[2024-06-11T01:18:32.255Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > 
testIncrementalAlterConfigsPreMigration(ClusterInstance) > 
testIncrementalAlterConfigsPreMigration [1] Type=ZK, MetadataVersion=3.4-IV0, 
Security=PLAINTEXT STARTED
[2024-06-11T01:18:36.664Z] 
[2024-06-11T01:18:36.664Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > 
testIncrementalAlterConfigsPreMigration(ClusterInstance) > 
testIncrementalAlterConfigsPreMigration [1] Type=ZK, MetadataVersion=3.4-IV0, 
Security=PLAINTEXT PASSED
[2024-06-11T01:18:36.664Z] 
[2024-06-11T01:18:36.664Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testDualWriteScram(ClusterInstance) > 
testDualWriteScram [1] Type=ZK, MetadataVersion=3.5-IV2, Security=PLAINTEXT 
STARTED
[2024-06-11T01:18:46.410Z] 
[2024-06-11T01:18:46.410Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testDualWriteScram(ClusterInstance) > 
testDualWriteScram [1] Type=ZK, MetadataVersion=3.5-IV2, Security=PLAINTEXT 
PASSED
[2024-06-11T01:18:46.410Z] 
[2024-06-11T01:18:46.410Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > 
testNewAndChangedTopicsInDualWrite(ClusterInstance) > 
testNewAndChangedTopicsInDualWrite [1] Type=ZK, MetadataVersion=3.4-IV0, 
Security=PLAINTEXT STARTED
[2024-06-11T01:18:58.078Z] 
[2024-06-11T01:18:58.078Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > 
testNewAndChangedTopicsInDualWrite(ClusterInstance) > 
testNewAndChangedTopicsInDualWrite [1] Type=ZK, MetadataVersion=3.4-IV0, 
Security=PLAINTEXT PASSED
[2024-06-11T01:18:58.078Z] 
[2024-06-11T01:18:58.078Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testDualWriteQuotaAndScram(ClusterInstance) > 
testDualWriteQuotaAndScram [1] Type=ZK, MetadataVersion=3.5-IV2, 
Security=PLAINTEXT STARTED
[2024-06-11T01:19:08.810Z] 
[2024-06-11T01:19:08.810Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testDualWriteQuotaAndScram(ClusterInstance) > 
testDualWriteQuotaAndScram [1] Type=ZK, MetadataVersion=3.5-IV2, 
Security=PLAINTEXT PASSED
[2024-06-11T01:19:08.810Z] 
[2024-06-11T01:19:08.810Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrate(ClusterInstance) > testMigrate [1] 
Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT STARTED
[2024-06-11T01:19:13.212Z] 
[2024-06-11T01:19:13.212Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrate(ClusterInstance) > testMigrate [1] 
Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT PASSED
[2024-06-11T01:19:13.212Z] 
[2024-06-11T01:19:13.212Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateAcls(ClusterInstance) > 
testMigrateAcls [1] Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT STARTED
[2024-06-11T01:19:14.308Z] 
[2024-06-11T01:19:14.308Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateAcls(ClusterInstance) > 
testMigrateAcls [1] Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT PASSED
[2024-06-11T01:19:14.308Z] 
[2024-06-11T01:19:14.308Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testStartZkBrokerWithAuthorizer(ClusterInstance) 
> testStartZkBrokerWithAuthorizer [1] Type=ZK, MetadataVersion=3.4-IV0, 
Security=PLAINTEXT STARTED
[2024-06-11T01:19:25.774Z] 
[2024-06-11T01:19:25.774Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testStartZkBrokerWithAuthorizer(ClusterInstance) 
> testStartZkBrokerWithAuthorizer [1] Type=ZK, MetadataVersion=3.4-IV0, 
Security=PLAINTEXT PASSED
[2024-06-11T01:19:25.774Z] 
[2024-06-11T01:19:25.774Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite 
[1] Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT STARTED
[2024-06-11T01:19:37.240Z] 
[2024-06-11T01:19:37.240Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite 
[1] Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT PASSED
[2024-06-11T01:19:37.240Z] 
[2024-06-11T01:19:37.240Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > 

Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2995

2024-06-10 Thread Apache Jenkins Server
See