Re: [DISCUSS] KIP-988 Streams Standby Task Update Listener

2023-10-23 Thread Sophie Blee-Goldman
Just want to checkpoint the current state of this KIP and make sure we're
on track to get it into 3.7 (we still have a few weeks) -- looks like
there are two remaining open questions, both relating to the
middle/intermediate callback:

1. What to name it: seems like the primary candidates are onBatchLoaded and
onBatchUpdated (and maybe also onStandbyUpdated?)
2. What additional information can we pass in that would strike a good
balance between being helpful and impacting performance.

Regarding #1, I think all of the current options are reasonable enough that
we should just let Colt decide which he prefers. I personally think
#onBatchUpdated is fine -- Bruno does make a fair point, but English grammar
can be tricky: while it could be argued that it is the store which is
updated, not the batch, I feel it is perfectly clear what is meant by
"onBatchUpdated", and to me this doesn't sound weird at all. That's just my
two cents in case it helps, but again, whatever makes sense to you, Colt,
is fine.

When it comes to #2 -- as much as I would love to dig into the Consumer
client lore and see if we can modify existing APIs or add new ones in order
to get the desired offset metadata in an efficient way, I think we're
starting to go down a rabbit hole that is going to expand the scope way
beyond what Colt thought he was signing up for. I would advocate to focus
on just the basic feature for now and drop the end-offset from the
callback. Once we have a standby listener it will be easy to expand on with
a followup KIP if/when we find an efficient way to add additional useful
information. I think it will also become more clear what is and isn't
useful after more people get to using it in the real world

Colt/Eduwer: how necessary is receiving the end offset during a batch
update to your own application use case?

Also, for those who really do need to check the current end offset, I
believe in theory you should be able to use the KafkaStreams#metrics API to
get the current lag and/or end offset for the changelog -- it's possible
this does not represent the most up-to-date end offset (I'm not sure it
does or does not), but it should be close enough to be reliable and useful
for the purpose of monitoring -- I mean it is a metric, after all.
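To make the metrics idea concrete, here is a hypothetical sketch of that kind of lookup. Note the metric name ("records-lag"), the flat String-keyed map, and the helper are illustrative stand-ins -- KafkaStreams#metrics() actually returns a MetricName-keyed map, so this is not the real API surface:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: KafkaStreams#metrics() actually returns a
// Map<MetricName, ? extends Metric>; a flat String->Number map stands in
// for it here, and the "records-lag" metric name is an assumption.
public class ChangelogLagLookup {
    static Long lagFor(Map<String, Number> metrics, String topic, int partition) {
        // Consumer fetch metrics are typically reported per topic-partition.
        String key = topic + "-" + partition + ".records-lag";
        Number v = metrics.get(key);
        return v == null ? null : v.longValue();
    }

    public static void main(String[] args) {
        Map<String, Number> metrics = new HashMap<>();
        metrics.put("my-store-changelog-0.records-lag", 42L);
        System.out.println(lagFor(metrics, "my-store-changelog", 0)); // 42
    }
}
```

The point being: the lag is already exported as a metric, so a periodic scan of the metrics map avoids any extra remote calls.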

Hope this helps -- in the end, it's up to you (Colt) to decide what you
want to bring in scope or not. We still have more than 3 weeks until the
KIP freeze as currently proposed, so in theory you could even implement
this KIP without the end offset and then do a followup KIP to add the end
offset within the same release, ie without any deprecations. There are
plenty of paths forward here, so don't let us drag this out forever if you
know what you want

Cheers,
Sophie

On Fri, Oct 20, 2023 at 10:57 AM Matthias J. Sax  wrote:

> Forgot one thing:
>
> We could also pass `currentLag()` into `onBatchLoaded()` instead of
> end-offset.
>
>
> -Matthias
>
> On 10/20/23 10:56 AM, Matthias J. Sax wrote:
> > Thanks for digging into this Bruno.
> >
> > The JavaDoc on the consumer does not say anything specific about
> > `endOffset` guarantees:
> >
> >> Get the end offsets for the given partitions. In the default {@code
> >> read_uncommitted} isolation level, the end
> >> offset is the high watermark (that is, the offset of the last
> >> successfully replicated message plus one). For
> >> {@code read_committed} consumers, the end offset is the last stable
> >> offset (LSO), which is the minimum of
> >> the high watermark and the smallest offset of any open transaction.
> >> Finally, if the partition has never been
> >> written to, the end offset is 0.
> >
> > Thus, I actually believe that it would be ok to change the
> > implementation and serve the answer from the `TopicPartitionState`?
> >
> > Another idea would be to use `currentLag()` in combination with
> > `position()` (or the offset of the last read record) to compute the
> > end-offset on the fly?
> >
> >
> > -Matthias
> >
> > On 10/20/23 4:00 AM, Bruno Cadonna wrote:
> >> Hi,
> >>
> >> Matthias is correct that the end offsets are stored somewhere in the
> >> metadata of the consumer. More precisely, they are stored in the
> >> `TopicPartitionState`. However, I could not find public API on the
> >> consumer other than currentLag() that uses the stored end offsets. If
> >> I understand the code correctly, method endOffsets() always triggers a
> >> remote call.
> >>
> >> I am a bit concerned about doing remote calls every commit.interval.ms
> >> (by default 200ms under EOS). At the moment the remote calls are only
> >> issued if an optimization for KTables is turned on where changelog
> >> topics are replaced with the input topic of the KTable. The current
> >> remote calls retrieve all committed offsets of the group at once. If I
> >> understand correctly, that is one single remote call. Remote calls for
> >> getting end offsets of changelog topics -- as I understand you are
> >> planning to issue -- will probably 
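The end-offset arithmetic Matthias suggests above can be sketched in a few lines. This assumes read_uncommitted semantics (end offset = high watermark); the inputs stand in for Consumer#position() and Consumer#currentLag() for a given TopicPartition, and the helper is illustrative, not an existing API:

```java
import java.util.OptionalLong;

// Sketch of deriving an end offset locally instead of via a remote
// endOffsets() call: the next fetch position plus the current lag equals
// the high watermark under read_uncommitted semantics. The arguments stand
// in for Consumer#position() and Consumer#currentLag().
public class EndOffsetFromLag {
    static OptionalLong approxEndOffset(long position, OptionalLong currentLag) {
        if (currentLag.isEmpty()) {
            return OptionalLong.empty(); // lag not yet known locally
        }
        return OptionalLong.of(position + currentLag.getAsLong());
    }

    public static void main(String[] args) {
        // position 100, lag 25 -> end offset 125
        System.out.println(approxEndOffset(100L, OptionalLong.of(25L)));
    }
}
```

Since currentLag() is served from the consumer's local TopicPartitionState, this computation would avoid the per-commit-interval remote calls Bruno is concerned about.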

Re: Apache Kafka 3.7.0 Release

2023-10-23 Thread Sophie Blee-Goldman
Actually I have a few questions about the schedule:

1. Why is the KIP freeze deadline on a Saturday? Traditionally this has
been on a Wednesday, which is nice because it gives people until Monday to
kick off the vote and still leaves a full 3 working days to review and vote
on it.
2. Why are the subsequent deadlines on different days of the week? Usually
we aim to have the freeze deadlines separated by a whole number of
weeks. Besides being a natural consequence of the typical one- or two-week
separation between freeze dates, this makes it easy for everyone to
remember when the next deadline is so they can make sure to get everything
in on time. I worry that varying this will catch people off guard.
3. Is there a particular reason for having the feature freeze almost a full
3 weeks from the KIP freeze? I understand moving the KIP freeze deadline up
to account for recent release delays, but aren't we wasting some of that
gained time by having 3 weeks between the KIP and feature freeze (which are
usually separated by just a single week)?
4. On the other hand, we usually have a full two weeks from the feature
freeze deadline to the code freeze, but with the given schedule there would
only be a week and a half. Given how important this period is for testing
and stabilizing the release, and how vital it is for uncovering blockers
that would otherwise delay the release, I really think we should
maintain the two-week gap (at a minimum)

Note that historically, we have set all the deadlines on a Wednesday and
when in doubt erred on the side of an earlier deadline, to encourage folks
to get their work completed and stabilized as soon as possible. We can, and
often have, allowed things to come in late between the Wednesday freeze
deadline and the following Friday, but only on a case-by-case basis. This
way the RM has the flexibility to determine what to allow and when, if need
be, while still having everyone aim for the established deadlines.

Just to throw a suggestion out there, if we want to avoid running into the
winter holidays while still making up for slipping of recent releases, what
about something like this:

KIP Freeze: Nov 22nd
Feature Freeze: Nov 29th
Code Freeze: Dec 13th

We can keep the release target as Jan 3rd or move it up to Dec 27th.
Personally, I would just aim to have it as Dec 27th but keep the stated
target as Jan 3rd, to account for unexpected blockers/delays and time away
during the winter holidays

Thoughts?

On Mon, Oct 23, 2023 at 3:14 PM Sophie Blee-Goldman 
wrote:

> Can you add the 3.7 plan to the release schedule page?
>
> (this -->
> https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan)
>
> Thanks!
>
> On Sun, Oct 15, 2023 at 2:27 AM Stanislav Kozlovski
>  wrote:
>
>> Hey Chris,
>>
>> Thanks for the catch! It was indeed copied and I wasn't sure what to make
>> of the bullet point, so I kept it. What you say makes sense - I removed
>> it.
>>
>> I also added KIP-976!
>>
>> Cheers!
>>
>> On Sat, Oct 14, 2023 at 9:35 PM Chris Egerton 
>> wrote:
>>
>> > Hi Stanislav,
>> >
>> > Thanks for putting this together! I think the "Ensure that release
>> > candidates include artifacts for the new Connect test-plugins module"
>> > section (which I'm guessing was copied over from the 3.6.0 release
>> > plan?) can be removed; we made sure that those artifacts were present
>> > for 3.6.0, and I don't anticipate any changes that would make them
>> > likelier to be accidentally dropped in subsequent releases than any
>> > other Maven artifacts that we publish.
>> >
>> > Also, can we add KIP-976 (
>> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-976%3A+Cluster-wide+dynamic+log+adjustment+for+Kafka+Connect
>> > ) to the release plan? The vote thread for it passed last week and I've
>> > published a complete PR (https://github.com/apache/kafka/pull/14538),
>> > so it shouldn't be too difficult to get things merged in time for 3.7.0.
>> >
>> > Cheers,
>> >
>> > Chris
>> >
>> > On Sat, Oct 14, 2023 at 3:26 PM Stanislav Kozlovski
>> >  wrote:
>> >
>> > > Thanks for letting me drive it, folks.
>> > >
>> > > I've created the 3.7.0 release page here:
>> > > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+3.7.0
>> > > It outlines the key milestones and important dates for the release.
>> > >
>> > > In particular, since the last two releases slipped their originally
>> > > targeted release date by taking an average of 46 days after code
>> > > freeze (as opposed to the minimum, which is 14 days), I pulled the
>> > > dates forward to try and catch up with the original release schedule.
>> > > You can refer to the last release during the Christmas holiday
>> > > season - Apache Kafka 3.4 - to see sample dates.
>> > >
>> > > The currently proposed dates are:
>> > >
>> > > *KIP Freeze - 18th November *(Saturday)
>> > > 

Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.5 #87

2023-10-23 Thread Apache Jenkins Server
See 




Re: Apache Kafka 3.7.0 Release

2023-10-23 Thread Sophie Blee-Goldman
Can you add the 3.7 plan to the release schedule page?

(this -->
https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan)

Thanks!

On Sun, Oct 15, 2023 at 2:27 AM Stanislav Kozlovski
 wrote:

> Hey Chris,
>
> Thanks for the catch! It was indeed copied and I wasn't sure what to make
> of the bullet point, so I kept it. What you say makes sense - I removed it.
>
> I also added KIP-976!
>
> Cheers!
>
> On Sat, Oct 14, 2023 at 9:35 PM Chris Egerton 
> wrote:
>
> > Hi Stanislav,
> >
> > Thanks for putting this together! I think the "Ensure that release
> > candidates include artifacts for the new Connect test-plugins module"
> > section (which I'm guessing was copied over from the 3.6.0 release plan?)
> > can be removed; we made sure that those artifacts were present for 3.6.0,
> > and I don't anticipate any changes that would make them likelier to be
> > accidentally dropped in subsequent releases than any other Maven
> > artifacts that we publish.
> >
> > Also, can we add KIP-976 (
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-976%3A+Cluster-wide+dynamic+log+adjustment+for+Kafka+Connect
> > ) to the release plan? The vote thread for it passed last week and I've
> > published a complete PR (https://github.com/apache/kafka/pull/14538),
> > so it shouldn't be too difficult to get things merged in time for 3.7.0.
> >
> > Cheers,
> >
> > Chris
> >
> > On Sat, Oct 14, 2023 at 3:26 PM Stanislav Kozlovski
> >  wrote:
> >
> > > Thanks for letting me drive it, folks.
> > >
> > > I've created the 3.7.0 release page here:
> > > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+3.7.0
> > > It outlines the key milestones and important dates for the release.
> > >
> > > In particular, since the last two releases slipped their originally
> > > targeted release date by taking an average of 46 days after code
> > > freeze (as opposed to the minimum, which is 14 days), I pulled the
> > > dates forward to try and catch up with the original release schedule.
> > > You can refer to the last release during the Christmas holiday
> > > season - Apache Kafka 3.4 - to see sample dates.
> > >
> > > The currently proposed dates are:
> > >
> > > *KIP Freeze - 18th November *(Saturday)
> > > *This is one month and four days from now - rather short - but I'm
> > > afraid it is the only lever that's easy to pull forward.*
> > > As usual, a KIP must be accepted by this date in order to be
> > > considered for this release. Note, any KIP that may not be implemented
> > > in a week, or that might destabilize the release, should be deferred.
> > >
> > > *Feature Freeze - 8th December* (Friday)
> > > *This follows 3 weeks after the KIP Freeze, as has been the case in our
> > > latest releases.*
> > > By this point, we want all major features to be merged & us to be
> > > working on stabilisation. Minor features should have PRs, the release
> > > branch should be cut; anything not in this state will be automatically
> > > moved to the next release in JIRA
> > >
> > > *Code Freeze - 20th December* (Wednesday)
> > >
> > > *Critically, this is before the holiday season and ends in the middle
> > > of the week, to give contributors more time and flexibility to address
> > > any last-minute issues without eating into the time people usually
> > > take for holidays. It comes 12 days after the Feature Freeze. This is
> > > two days shorter than the usual code freeze window. I don't have a
> > > strong opinion and am open to extending it to Friday, or trading off a
> > > day or two with the KF<->FF date range.*
> > >
> > > *Release -* *after January 3rd*.
> > > *It comes after a minimum of two weeks of stabilization, so the
> > > earliest we can start releasing is January 3rd. We will move as fast
> > > as we can and aim to complete it as early in January as possible.*
> > >
> > > As for the initially-populated KIPs in the release plan, I did the
> > > following:
> > >
> > > I kept 4 KIPs that were mentioned in 3.6, saying they would have
> > > minor parts finished in 3.7 (as the major ones went out in 3.6):
> > > - KIP-405 Tiered Storage mentioned a major part went out with 3.6
> > > and the remainder will come with 3.7.
> > > - KIP-890 mentioned Part 1 shipped in 3.6. I am assuming the
> > > remainder will come in 3.7, and have contacted the author to confirm.
> > > - KIP-926 was partially implemented in 3.6. I am assuming the
> > > remainder will come in 3.7, and have contacted the author to confirm.
> > > - KIP-938 mentioned that the majority was completed and a small
> > > remainder re: ForwardingManager metrics will come in 3.7. I have
> > > contacted the author to confirm.
> > >
> > > I then went through the JIRA filter which looks at open issues with a
> > > Fix Version of 3.7 and added KIP-770, KIP-858, and KIP-980.
> > > I also found a fair amount of JIRAs that were 

[jira] [Created] (KAFKA-15675) Fix flaky ConnectorRestartApiIntegrationTest.testMultiWorkerRestartOnlyConnector() test

2023-10-23 Thread Kirk True (Jira)
Kirk True created KAFKA-15675:
-

 Summary: Fix flaky 
ConnectorRestartApiIntegrationTest.testMultiWorkerRestartOnlyConnector() test
 Key: KAFKA-15675
 URL: https://issues.apache.org/jira/browse/KAFKA-15675
 Project: Kafka
  Issue Type: Bug
Reporter: Kirk True
 Attachments: error.stacktrace.txt, error.stdout.txt

This integration test is flaky around 9% of test runs. Source: [Gradle 
Enterprise test 
trends|https://ge.apache.org/scans/tests?search.relativeStartTime=P28D=KAFKA=org.apache.kafka.connect.integration.ConnectorRestartApiIntegrationTest=testMultiWorkerRestartOnlyConnector].

One failure had this message:
{code:java}
java.lang.AssertionError: Failed to stop connector and tasks within 12ms 
{code}
Please see the attachments for the stack trace and stdout log.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15674) Consider making RequestLocal thread safe

2023-10-23 Thread Justine Olshan (Jira)
Justine Olshan created KAFKA-15674:
--

 Summary: Consider making RequestLocal thread safe
 Key: KAFKA-15674
 URL: https://issues.apache.org/jira/browse/KAFKA-15674
 Project: Kafka
  Issue Type: Improvement
Reporter: Justine Olshan


KAFKA-15653 found an issue with using a RequestLocal on multiple threads. 
The RequestLocal object was originally designed in a non-thread-safe manner for 
performance.

It is passed around to methods that write to the log, and KAFKA-15653 showed 
that it is not too hard to accidentally share it between different threads.

Given all this, and the new changes and dependencies in the project compared to 
when it was first introduced, we may want to reconsider the thread safety of 
RequestLocal.
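One common way to keep a non-thread-safe per-request object from being shared is thread confinement via ThreadLocal. A minimal sketch (the ByteBuffer stands in for the state a RequestLocal carries; this is not Kafka's actual implementation):

```java
import java.nio.ByteBuffer;

// Sketch of thread confinement via ThreadLocal: each thread lazily gets
// its own instance, so none is ever shared across threads. The ByteBuffer
// is a stand-in for the non-thread-safe state a RequestLocal carries.
public class PerThreadRequestLocal {
    private static final ThreadLocal<ByteBuffer> BUFFER =
        ThreadLocal.withInitial(() -> ByteBuffer.allocate(1024));

    static ByteBuffer current() {
        return BUFFER.get();
    }

    public static void main(String[] args) throws InterruptedException {
        ByteBuffer mine = current();
        ByteBuffer[] theirs = new ByteBuffer[1];
        Thread t = new Thread(() -> theirs[0] = current());
        t.start();
        t.join();
        // Distinct threads see distinct instances; repeated calls on the
        // same thread see the same one.
        System.out.println(mine != theirs[0] && mine == current()); // true
    }
}
```

The trade-off is that thread confinement keeps the no-synchronization performance profile, at the cost of one instance per thread.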



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15657) Unexpected errors when producing transactionally in 3.6

2023-10-23 Thread Travis Bischel (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Bischel resolved KAFKA-15657.

Resolution: Duplicate

Closing this as a different manifestation (and thus, duplicate of) KAFKA-15653

> Unexpected errors when producing transactionally in 3.6
> ---
>
> Key: KAFKA-15657
> URL: https://issues.apache.org/jira/browse/KAFKA-15657
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 3.6.0
>Reporter: Travis Bischel
>Priority: Major
>
> In loop-testing the franz-go client, I am frequently receiving INVALID_RECORD 
> (which I created a separate issue for), and INVALID_TXN_STATE and 
> UNKNOWN_SERVER_ERROR.
> INVALID_TXN_STATE is being returned even though the partitions have been 
> added to the transaction (AddPartitionsToTxn). Nothing about the code has 
> changed between 3.5 and 3.6, and I have loop-integration-tested this code 
> against 3.5 thousands of times. 3.6 is newly - and always - returning 
> INVALID_TXN_STATE. If I change the code to retry on INVALID_TXN_STATE, I 
> eventually quickly (always) receive UNKNOWN_SERVER_ERROR. In looking at the 
> broker logs, the broker indicates that sequence numbers are out of order - 
> but (a) I am repeating requests that were in order (so something on the 
> broker got a little haywire maybe? or maybe this is due to me ignoring 
> invalid_txn_state?), _and_ I am not receiving OUT_OF_ORDER_SEQUENCE_NUMBER, I 
> am receiving UNKNOWN_SERVER_ERROR.
> I think the main problem is the client unexpectedly receiving 
> INVALID_TXN_STATE, but a second problem here is that OOOSN is being mapped to 
> USE on return for some reason.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15656) Frequent INVALID_RECORD on Kafka 3.6

2023-10-23 Thread Travis Bischel (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Bischel resolved KAFKA-15656.

Resolution: Duplicate

Closing this as a different manifestation of (and thus, duplicate of) 
KAFKA-15653

> Frequent INVALID_RECORD on Kafka 3.6
> 
>
> Key: KAFKA-15656
> URL: https://issues.apache.org/jira/browse/KAFKA-15656
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 3.6.0
>Reporter: Travis Bischel
>Priority: Major
> Attachments: invalid_record.log
>
>
> Using this docker-compose.yml:
> {noformat}
> version: "3.7"
> services:
>   kafka:
>     image: bitnami/kafka:latest
>     network_mode: host
>     environment:
>       KAFKA_ENABLE_KRAFT: yes
>       KAFKA_CFG_PROCESS_ROLES: controller,broker
>       KAFKA_CFG_CONTROLLER_LISTENER_NAMES: CONTROLLER
>       KAFKA_CFG_LISTENERS: PLAINTEXT://:9092,CONTROLLER://:9093
>       KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: 
> CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
>       KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: 1@127.0.0.1:9093
>       # Set this to "PLAINTEXT://127.0.0.1:9092" if you want to run this 
> container on localhost via Docker
>       KAFKA_CFG_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
>       KAFKA_CFG_NODE_ID: 1
>       ALLOW_PLAINTEXT_LISTENER: yes
>       KAFKA_KRAFT_CLUSTER_ID: XkpGZQ27R3eTl3OdTm2LYA # 16 byte base64-encoded 
> UUID{noformat}
> And running franz-go integration tests with KGO_TEST_RF=1, I consistently 
> receive INVALID_RECORD errors.
>  
> Looking at the container logs, I see these problematic log lines:
> {noformat}
> 2023-10-19 23:33:47,942] ERROR [ReplicaManager broker=1] Error processing 
> append operation on partition 
> 0cf2f3faaafd3f906ea848b684b04833ca162bcd19ecae2cab36767a54f248c7-0 
> (kafka.server.ReplicaManager) 
> org.apache.kafka.common.InvalidRecordException: Invalid negative header key 
> size -25
> [2023-10-19 23:33:47,942] ERROR [ReplicaManager broker=1] Error processing 
> append operation on partition 
> 0cf2f3faaafd3f906ea848b684b04833ca162bcd19ecae2cab36767a54f248c7-6 
> (kafka.server.ReplicaManager) 
> org.apache.kafka.common.InvalidRecordException: Reached end of input stream 
> before skipping all bytes. Remaining bytes:94
> [2023-10-19 23:33:47,942] ERROR [ReplicaManager broker=1] Error processing 
> append operation on partition 
> 0cf2f3faaafd3f906ea848b684b04833ca162bcd19ecae2cab36767a54f248c7-1 
> (kafka.server.ReplicaManager) 
> org.apache.kafka.common.InvalidRecordException: Found invalid number of 
> record headers -26
> [2023-10-19 23:33:47,948] ERROR [ReplicaManager broker=1] Error processing 
> append operation on partition 
> 0cf2f3faaafd3f906ea848b684b04833ca162bcd19ecae2cab36767a54f248c7-6 
> (kafka.server.ReplicaManager) 
> org.apache.kafka.common.InvalidRecordException: Found invalid number of 
> record headers -27
> [2023-10-19 23:33:47,950] ERROR [ReplicaManager broker=1] Error processing 
> append operation on partition 
> 0cf2f3faaafd3f906ea848b684b04833ca162bcd19ecae2cab36767a54f248c7-22 
> (kafka.server.ReplicaManager)
> org.apache.kafka.common.InvalidRecordException: Invalid negative header key 
> size -25
> [2023-10-19 23:33:47,947] ERROR [ReplicaManager broker=1] Error processing 
> append operation on partition 
> c63b6e30987317fad18815effb8d432b6df677d2ab56cf6da517bb93fa49b74b-25 
> (kafka.server.ReplicaManager)
> org.apache.kafka.common.InvalidRecordException: Found invalid number of 
> record headers -50
> [2023-10-19 23:33:47,959] ERROR [ReplicaManager broker=1] Error processing 
> append operation on partition 
> c63b6e30987317fad18815effb8d432b6df677d2ab56cf6da517bb93fa49b74b-25 
> (kafka.server.ReplicaManager) 
>  {noformat}
>  
> I modified franz-go with a diff to print the request that was written to the 
> wire once this error occurs. Attached is a v9 produce request. I deserialized 
> it locally and am not seeing the corrupt data that Kafka is printing. It's 
> possible there is a bug in the client, but again, these tests have never 
> received this error pre-Kafka 3.6. It _looks like_ there is either corruption 
> when processing the incoming data, or there is some problematic race 
> condition in the broker - I'm not sure which.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15673) Define new client_metrics resource type configuration to store metric config.

2023-10-23 Thread Apoorv Mittal (Jira)
Apoorv Mittal created KAFKA-15673:
-

 Summary: Define new client_metrics resource type configuration to 
store metric config. 
 Key: KAFKA-15673
 URL: https://issues.apache.org/jira/browse/KAFKA-15673
 Project: Kafka
  Issue Type: Sub-task
Reporter: Apoorv Mittal
Assignee: Apoorv Mittal


KIP-714 introduces a new resource type named CLIENT_METRICS - details: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-714%3A+Client+metrics+and+observability#KIP714:Clientmetricsandobservability-Clientmetricsconfiguration
The CLIENT_METRICS resource type should be used for storing dynamic client 
metrics configurations through the kafka-configs.sh utility.

The changes require:
 * Adding CLIENT_METRICS to the resource types.
 * Corresponding DYNAMIC client configurations in resources.
 * Changes to support dynamic loading of configuration on changes.
 * Changes to support API calls to fetch data stored against the new resource.
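For illustration, addressing the new resource type through kafka-configs.sh might look something like the sketch below. The entity type spelling, subscription name, and config keys are assumptions drawn from the KIP-714 discussion, not a finalized CLI surface:

```shell
# Hypothetical usage sketch -- entity type, name, and config keys are
# assumptions from the KIP-714 discussion, not a released interface.
bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type client-metrics --entity-name basic-metrics-subscription \
  --add-config 'metrics=org.apache.kafka.consumer.,interval.ms=300000'
```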



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.6 #100

2023-10-23 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 308886 lines...]
> Task :connect:json:copyDependantLibs UP-TO-DATE
> Task :connect:json:jar UP-TO-DATE
> Task :connect:json:generateMetadataFileForMavenJavaPublication
> Task :connect:json:testClasses UP-TO-DATE
> Task :connect:json:testJar
> Task :storage:api:compileTestJava
> Task :storage:api:testClasses
> Task :connect:json:testSrcJar
> Task :connect:api:testJar
> Task :connect:json:publishMavenJavaPublicationToMavenLocal
> Task :connect:json:publishToMavenLocal
> Task :connect:api:testSrcJar
> Task :connect:api:publishMavenJavaPublicationToMavenLocal
> Task :connect:api:publishToMavenLocal
> Task :server-common:compileTestJava
> Task :server-common:testClasses
> Task :raft:compileTestJava
> Task :raft:testClasses

> Task :clients:javadoc
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java:81:
 warning - Tag @link:illegal character: "60" in "#define(String, Type, 
Importance, String, String, int, Width, String, List)"
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java:81:
 warning - Tag @link:illegal character: "62" in "#define(String, Type, 
Importance, String, String, int, Width, String, List)"
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java:81:
 warning - Tag @link: can't find define(String, Type, Importance, String, 
String, int, Width, String, List) in 
org.apache.kafka.common.config.ConfigDef

> Task :group-coordinator:compileTestJava
> Task :group-coordinator:testClasses

> Task :clients:javadoc
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/clients/src/main/java/org/apache/kafka/clients/admin/ScramMechanism.java:32:
 warning - Tag @see: missing final '>': "https://cwiki.apache.org/confluence/display/KAFKA/KIP-554%3A+Add+Broker-side+SCRAM+Config+API;>KIP-554:
 Add Broker-side SCRAM Config API

 This code is duplicated in 
org.apache.kafka.common.security.scram.internals.ScramMechanism.
 The type field in both files must match and must not change. The type field
 is used both for passing ScramCredentialUpsertion and for the internal
 UserScramCredentialRecord. Do not change the type field."
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/package-info.java:21:
 warning - Tag @link: reference not found: 
org.apache.kafka.common.security.oauthbearer
5 warnings

> Task :metadata:compileTestJava
> Task :metadata:testClasses
> Task :clients:javadocJar
> Task :clients:srcJar
> Task :clients:testJar
> Task :clients:testSrcJar
> Task :clients:publishMavenJavaPublicationToMavenLocal
> Task :clients:publishToMavenLocal
> Task :core:classes
> Task :core:compileTestJava NO-SOURCE
> Task :core:compileTestScala
> Task :core:testClasses
> Task :streams:compileTestJava
> Task :streams:testClasses
> Task :streams:testJar
> Task :streams:testSrcJar
> Task :streams:publishMavenJavaPublicationToMavenLocal
> Task :streams:publishToMavenLocal

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

For more on this, please refer to 
https://docs.gradle.org/8.2.1/userguide/command_line_interface.html#sec:command_line_warnings
 in the Gradle documentation.

BUILD SUCCESSFUL in 7m 36s
94 actionable tasks: 41 executed, 53 up-to-date

Publishing build scan...
https://ge.apache.org/s/kny5s7ugd4c7q

[Pipeline] sh
+ grep ^version= gradle.properties
+ cut -d= -f 2
[Pipeline] dir
Running in 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/streams/quickstart
[Pipeline] {
[Pipeline] sh
+ mvn clean install -Dgpg.skip
[INFO] Scanning for projects...
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] Kafka Streams :: Quickstart[pom]
[INFO] streams-quickstart-java[maven-archetype]
[INFO] 
[INFO] < org.apache.kafka:streams-quickstart >-
[INFO] Building Kafka Streams :: Quickstart 3.6.1-SNAPSHOT[1/2]
[INFO]   from pom.xml
[INFO] [ pom ]-
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart ---
[INFO] 
[INFO] --- site:3.5.1:attach-descriptor (attach-descriptor) @ 
streams-quickstart ---
[INFO] 
[INFO] --- gpg:1.6:sign (sign-artifacts) @ streams-quickstart ---
[INFO] 
[INFO] --- install:2.5.2:install (default-install) @ streams-quickstart 

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.4 #170

2023-10-23 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 526270 lines...]

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testLogDirEventNotificationsDeletion() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testLogDirEventNotificationsDeletion() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testGetLogConfigs() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testGetLogConfigs() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testBrokerSequenceIdMethods() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testBrokerSequenceIdMethods() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testAclMethods() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testAclMethods() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testCreateSequentialPersistentPath() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testCreateSequentialPersistentPath() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testConditionalUpdatePath() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testConditionalUpdatePath() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testDeleteTopicZNode() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testDeleteTopicZNode() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testDeletePath() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testDeletePath() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testGetBrokerMethods() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testGetBrokerMethods() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testJuteMaxBufffer() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testJuteMaxBufffer() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testCreateTokenChangeNotification() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testCreateTokenChangeNotification() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testGetTopicsAndPartitions() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testGetTopicsAndPartitions() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testChroot(boolean) > 
kafka.zk.KafkaZkClientTest.testChroot(boolean)[1] STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testChroot(boolean) > 
kafka.zk.KafkaZkClientTest.testChroot(boolean)[1] PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testChroot(boolean) > 
kafka.zk.KafkaZkClientTest.testChroot(boolean)[2] STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testChroot(boolean) > 
kafka.zk.KafkaZkClientTest.testChroot(boolean)[2] PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testRegisterBrokerInfo() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testRegisterBrokerInfo() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testRetryRegisterBrokerInfo() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testRetryRegisterBrokerInfo() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testConsumerOffsetPath() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testConsumerOffsetPath() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 176 > 
KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() STARTED

Gradle Test Run 

Re: [VOTE] KIP-979 Allow independently stop KRaft processes

2023-10-23 Thread Hailey Ni
Hi Ron,

I've added the "Rejected Alternatives" section in the KIP. Thanks for the
comments and +1 vote!

Thanks,
Hailey

On Mon, Oct 23, 2023 at 6:33 AM Ron Dagostino  wrote:

> Hi Hailey.  I'm +1 (binding), but could you add a "Rejected
> Alternatives" section to the KIP and mention the "--required-config "
> option that we decided against and the reason why we made the decision
> to reject it?  There were some other small things (dash instead of dot
> in the parameter names, --node-id instead of --broker-id), but
> cosmetic things like this don't warrant a mention, so I think there's
> just the one thing to document.
>
> Thanks for the KIP, and thanks for adjusting it along the way as the
> discussion moved forward.
>
> Ron
>
>
> On Mon, Oct 23, 2023 at 4:00 AM Federico Valeri 
> wrote:
> >
> > +1 (non binding)
> >
> > Thanks.
> >
> > On Mon, Oct 23, 2023 at 9:48 AM Kamal Chandraprakash
> >  wrote:
> > >
> > > +1 (non-binding). Thanks for the KIP!
> > >
> > > On Mon, Oct 23, 2023, 12:55 Hailey Ni 
> wrote:
> > >
> > > > Hi all,
> > > >
> > > > I'd like to call a vote on KIP-979 that will allow users to
> independently
> > > > stop KRaft processes.
> > > >
> > > >
> > > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-979%3A+Allow+independently+stop+KRaft+processes
> > > >
> > > > Thanks,
> > > > Hailey
> > > >
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2322

2023-10-23 Thread Apache Jenkins Server
See 




Re: Development with Git Worktree

2023-10-23 Thread Ismael Juma
Greg,

Thanks for making the change and sharing the benefit with the overall group.

Ismael

On Mon, Oct 23, 2023 at 10:03 AM Greg Harris 
wrote:

> Hey Kafka Developers,
>
> This is a small announcement that the gradle build now supports the
> git-worktree subcommand [1] on the 3.4, 3.5, 3.6, and trunk branches
> [2].
>
> If you've needed to check out multiple copies of Kafka concurrently,
> you previously needed to manage multiple full clones of the
> repository. Now you can add a worktree that shares a common git
> repository, transparently sharing all commits and branches with a
> single local repository.
>
> With this, you can run tests in the background for one or more
> branches while actively working on a different branch. If you have
> open PRs right now, you can take advantage of this change by merging
> or rebasing with trunk.
>
> [1] https://git-scm.com/docs/git-worktree
> [2] https://issues.apache.org/jira/browse/KAFKA-14767
>
> Thanks,
> Greg
>


Re: [VOTE] KIP-967: Support custom SSL configuration for Kafka Connect RestServer

2023-10-23 Thread Greg Harris
Hey Taras,

Thanks for the KIP!

The design you propose follows the conventions started in KIP-519, and
should feel natural to operators familiar with the broker feature.
I also like that we're able to clean up some connect-specific
functionality and make the codebase more consistent.

+1 (binding)

Thanks,
Greg

On Fri, Oct 20, 2023 at 8:03 AM Taras Ledkov  wrote:
>
> Hi Kafka Team.
>
> I'd like to call a vote on KIP-967: Support custom SSL configuration for 
> Kafka Connect RestServer [1].
> Discussion thread [2] was started more than 2 months ago and there were no 
> negative or critical comments.
>
> [1]. 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-967%3A+Support+custom+SSL+configuration+for+Kafka+Connect+RestServer
> [2]. https://lists.apache.org/thread/w0vmbf1yzgjo7hkzyyzjjnb509x6s9qq
>
> --
> With best regards,
> Taras Ledkov


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.5 #86

2023-10-23 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 5179 lines...]

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

See 
https://docs.gradle.org/8.0.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 11m 48s
263 actionable tasks: 213 executed, 50 up-to-date

See the profiling report at: 
file:///home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.5/build/reports/profile/profile-2023-10-23-16-47-59.html
A fine-grained performance profile is available: use the --scan option.
> Task :connect:transforms:checkstyleTest
> Task :connect:transforms:check
> Task :server-common:checkstyleTest
> Task :server-common:check
> Task :trogdor:checkstyleTest
> Task :trogdor:check
> Task :connect:api:checkstyleTest
> Task :connect:api:check
> Task :raft:compileTestJava
> Task :raft:testClasses
> Task :raft:spotbugsTest SKIPPED
> Task :group-coordinator:checkstyleTest
> Task :group-coordinator:check
> Task :streams:test-utils:checkstyleTest
> Task :streams:test-utils:check
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch JDK 11 and Scala 2.12
> Task :streams:testClasses
> Task :streams:streams-scala:compileTestJava NO-SOURCE
> Task :streams:spotbugsTest SKIPPED

> Task :streams:streams-scala:compileTestScala
[Warn] 
/home/jenkins/workspace/Kafka_kafka_3.5/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/kstream/KStreamSplitTest.scala:19:41:
 imported `Named` is permanently hidden by definition of type Named in package 
kstream
[Warn] 
/home/jenkins/workspace/Kafka_kafka_3.5/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/kstream/KStreamTest.scala:24:3:
 imported `Named` is permanently hidden by definition of type Named in package 
kstream
[Warn] 
/home/jenkins/workspace/Kafka_kafka_3.5/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/kstream/KTableTest.scala:21:3:
 imported `Named` is permanently hidden by definition of type Named in package 
kstream
three warnings found

> Task :streams:streams-scala:testClasses
> Task :streams:streams-scala:checkstyleTest NO-SOURCE
> Task :streams:streams-scala:spotbugsTest SKIPPED
> Task :streams:streams-scala:check
> Task :metadata:compileTestJava
> Task :metadata:testClasses
> Task :raft:checkstyleTest
> Task :raft:check
> Task :metadata:spotbugsTest SKIPPED
> Task :clients:checkstyleTest
> Task :clients:spotbugsMain
> Task :streams:streams-scala:classes
> Task :streams:streams-scala:checkstyleMain NO-SOURCE
> Task :streams:checkstyleTest
> Task :streams:check

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':streams:upgrade-system-tests-35:compileTestJava'.
> Could not resolve all files for configuration 
> ':streams:upgrade-system-tests-35:testCompileClasspath'.
   > Could not find org.apache.kafka:kafka-streams:null.
 Searched in the following locations:
   - 
https://repo.maven.apache.org/maven2/org/apache/kafka/kafka-streams/null/kafka-streams-null.pom
 If the artifact you are trying to retrieve can be found in the repository 
but without metadata in 'Maven POM' format, you need to adjust the 
'metadataSources { ... }' of the repository declaration.
 Required by:
 project :streams:upgrade-system-tests-35

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

See 
https://docs.gradle.org/8.0.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 12m 25s
260 actionable tasks: 209 executed, 51 up-to-date

See the profiling report at: 
file:///home/jenkins/workspace/Kafka_kafka_3.5/build/reports/profile/profile-2023-10-23-16-48-22.html
A fine-grained performance profile is available: use the --scan option.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch JDK 8 and Scala 2.12
> Task :metadata:checkstyleTest
> Task :metadata:check
> Task :streams:streams-scala:spotbugsMain

> Task :core:compileScala
Unexpected javac output: warning: [options] bootstrap class path not set in 
conjunction with -source 8

Development with Git Worktree

2023-10-23 Thread Greg Harris
Hey Kafka Developers,

This is a small announcement that the gradle build now supports the
git-worktree subcommand [1] on the 3.4, 3.5, 3.6, and trunk branches
[2].

If you've needed to check out multiple copies of Kafka concurrently,
you previously needed to manage multiple full clones of the
repository. Now you can add a worktree that shares a common git
repository, transparently sharing all commits and branches with a
single local repository.

With this, you can run tests in the background for one or more
branches while actively working on a different branch. If you have
open PRs right now, you can take advantage of this change by merging
or rebasing with trunk.

[1] https://git-scm.com/docs/git-worktree
[2] https://issues.apache.org/jira/browse/KAFKA-14767

Thanks,
Greg
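For anyone who hasn't used git-worktree before, the workflow Greg describes might look like this (paths, branch names, and the Gradle invocation are illustrative):

```shell
# Minimal git-worktree workflow, assuming an existing Kafka clone.
cd ~/src/kafka

# Add a linked worktree for the 3.6 branch in a sibling directory.
# It shares this clone's object database, so no second full clone is made.
git worktree add ../kafka-3.6 3.6

# Run tests there in the background while continuing to work on trunk here.
(cd ../kafka-3.6 && ./gradlew test) &

# Inspect the linked worktrees, and remove one when finished with it.
git worktree list
git worktree remove ../kafka-3.6
```

All worktrees share commits and branches, so a `git fetch` in any of them is visible to the others.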


Re: UncleanLeaderElectionsPerSec metric and Raft

2023-10-23 Thread Justine Olshan
Hey Neil,

I was taking a look at this code, and noticed that some unclean leader
election params were not implemented.
https://github.com/apache/kafka/blob/4612fe42af0df0a4c1affaf66c55d01eb6267ce3/metadata/src/main/java/org/apache/kafka/controller/ConfigurationControlManager.java#L499

I know you mentioned setting the non-topic config, but I wonder if the
feature is generally not built out. I think that once KIP-966 is
implemented, it will likely replace the old notion of unclean leader
election.

Still, if KRaft mode doesn't have unclean leader election, it should be
documented. I will get back to you on this.

Justine

On Wed, Oct 18, 2023 at 10:30 AM Neil Buesing  wrote:

> Development,
>
> with Raft controllers, is the unclean leader election / sec metric supposed
> to be available?
>
> kafka.controller:type=ControllerStats,name=UncleanLeaderElectionsPerSec
>
> Nothing in the documentation indicates that it isn't, and nothing in the
> code suggests to me that it wouldn't show up, but I even set unclean
> leader election to true for both brokers and controllers, and still
> nothing.
>
> (set this for all controllers and brokers)
>   KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE: true
>
> Happy to report a Jira, but wanted to figure out if the bug was in the
> documentation or the metric not being available?
>
> Thanks,
>
> Neil
>
> P.S. I did confirm that others have seen and wondered about this,
> https://github.com/strimzi/strimzi-kafka-operator/issues/8169, but that is
> about the only other report on this I have found.
>
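One way to narrow down whether this is a documentation bug or a missing metric is to check programmatically whether anything is registered under the ObjectName Neil quotes. The sketch below is a self-contained, hedged illustration: it registers a stand-in MBean under that name so it runs without Kafka; against a live controller you would instead attach over a JMX connector and skip the registration step.

```java
import javax.management.MBeanServer;
import javax.management.ObjectName;
import java.lang.management.ManagementFactory;

public class MetricCheck {
    // Stand-in for the real controller meter (illustrative only).
    public interface DummyMBean { long getCount(); }
    public static class Dummy implements DummyMBean {
        public long getCount() { return 0L; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName(
            "kafka.controller:type=ControllerStats,name=UncleanLeaderElectionsPerSec");
        // Register a dummy so the example is runnable without a broker.
        server.registerMBean(new Dummy(), name);

        // The actual check: is anything registered under that name?
        // On a real KRaft controller, false would confirm the metric
        // is simply not being emitted.
        System.out.println(server.isRegistered(name)); // true (we just registered it)
    }
}
```

The same check against a running process can be done with JConsole or any JMX client pointed at the controller's JMX port.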


Re: [DISCUSS] KIP-992 Proposal to introduce IQv2 Query Types: TimestampedKeyQuery and TimestampedRangeQuery

2023-10-23 Thread Bill Bejeck
Hey Hanyu,

Thanks for the KIP, it's a welcome addition.
Overall, the KIP looks good to me, I just have one comment.

Can you discuss the expected behavior when a user executes a timestamped
query against a non-timestamped store?  I think it should throw an
exception vs. using some default value.
If it's the case that Kafka Streams wraps all stores in a
`ValueAndTimestamp` store and whether a plain `V` or a
`ValueAndTimestamp` object is returned depends on the query type, then it
would be good to add those details to the KIP.

Thanks,
Bill



On Fri, Oct 20, 2023 at 5:07 PM Hanyu (Peter) Zheng
 wrote:

> Thank you Matthias,
>
> I will modify the KIP to eliminate this restriction.
>
> Sincerely,
> Hanyu
>
> On Fri, Oct 20, 2023 at 2:04 PM Hanyu (Peter) Zheng 
> wrote:
>
> > Thank you Alieh,
> >
> > In these two new query types, I will remove 'get' from all getter method
> > names.
> >
> > Sincerely,
> > Hanyu
> >
> > On Fri, Oct 20, 2023 at 10:40 AM Matthias J. Sax 
> wrote:
> >
> >> Thanks for the KIP Hanyu,
> >>
> >> One question:
> >>
> >> > To address this inconsistency, we propose that KeyQuery should be
> >> > restricted to querying kv-stores only, ensuring that it always returns a
> >> > plain V type, making the behavior of the aforementioned code more
> >> > predictable. Similarly, RangeQuery should be dedicated to querying
> >> > kv-stores, consistently returning only the plain V.
> >>
> >> Why do you want to restrict `KeyQuery` and `RangeQuery` to kv-stores? I
> >> think it would be possible to still allow both queries for ts-kv-stores,
> >> but change the implementation to return "plain V" instead of
> >> `ValueAndTimestamp`, ie, the implementation would automatically
> >> unwrap the value.
> >>
> >>
> >>
> >> -Matthias
> >>
> >> On 10/20/23 2:32 AM, Alieh Saeedi wrote:
> >> > Hey Hanyu,
> >> >
> >> > Thanks for the KIP. It seems good to me.
> >> > Just one point: AFAIK, we are going to remove "get" from the name of
> all
> >> > getter methods.
> >> >
> >> > Cheers,
> >> > Alieh
> >> >
> >> > On Thu, Oct 19, 2023 at 5:44 PM Hanyu (Peter) Zheng
> >> >  wrote:
> >> >
> >> >> Hello everyone,
> >> >>
> >> >> I would like to start the discussion for KIP-992: Proposal to
> introduce
> >> >> IQv2 Query Types: TimestampedKeyQuery and TimestampedRangeQuery
> >> >>
> >> >> The KIP can be found here:
> >> >>
> >> >>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-992%3A+Proposal+to+introduce+IQv2+Query+Types%3A+TimestampedKeyQuery+and+TimestampedRangeQuery
> >> >>
> >> >> Any suggestions are more than welcome.
> >> >>
> >> >> Many thanks,
> >> >> Hanyu
> >> >>
> >> >> On Thu, Oct 19, 2023 at 8:17 AM Hanyu (Peter) Zheng <
> >> pzh...@confluent.io>
> >> >> wrote:
> >> >>
> >> >>>
> >> >>>
> >> >>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-992%3A+Proposal+to+introduce+IQv2+Query+Types%3A+TimestampedKeyQuery+and+TimestampedRangeQuery
> >> >>>
> >> >>> --
> >> >>>
> >> >>> [image: Confluent] 
> >> >>> Hanyu (Peter) Zheng he/him/his
> >> >>> Software Engineer Intern
> >> >>> +1 (213) 431-7193 <+1+(213)+431-7193>
> >> >>> Follow us: [image: Blog]
> >> >>> <
> >> >>
> >>
> https://www.confluent.io/blog?utm_source=footer_medium=email_campaign=ch.email-signature_type.community_content.blog
> >> >>> [image:
> >> >>> Twitter] [image: LinkedIn]
> >> >>> [image: Slack]
> >> >>> [image: YouTube]
> >> >>> 
> >> >>>
> >> >>> [image: Try Confluent Cloud for Free]
> >> >>> <
> >> >>
> >>
> https://www.confluent.io/get-started?utm_campaign=tm.fm-apac_cd.inbound_source=gmail_medium=organic
> >> >>>
> >> >>>
> >> >>
> >> >>
> >> >
> >>
> >
> >
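The unwrapping behavior Matthias suggests above can be illustrated without any Kafka dependency. The sketch below is an assumed model, not code from the KIP: a timestamped store physically holds `ValueAndTimestamp<V>` entries; a `TimestampedKeyQuery` would return the pair, while a plain `KeyQuery` against the same store would transparently unwrap to a plain `V`.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class UnwrapSketch {
    // Stand-in for Kafka Streams' ValueAndTimestamp<V>.
    record ValueAndTimestamp<V>(V value, long timestamp) {}

    static class TimestampedStoreSketch<K, V> {
        private final Map<K, ValueAndTimestamp<V>> data = new HashMap<>();

        void put(K key, V value, long ts) {
            data.put(key, new ValueAndTimestamp<>(value, ts));
        }

        // TimestampedKeyQuery semantics: the value together with its timestamp.
        Optional<ValueAndTimestamp<V>> timestampedGet(K key) {
            return Optional.ofNullable(data.get(key));
        }

        // Plain KeyQuery semantics under the suggestion: the store unwraps
        // automatically, so callers always see a plain V regardless of
        // whether the underlying store is timestamped.
        Optional<V> get(K key) {
            return Optional.ofNullable(data.get(key)).map(ValueAndTimestamp::value);
        }
    }

    public static void main(String[] args) {
        TimestampedStoreSketch<String, Long> store = new TimestampedStoreSketch<>();
        store.put("alice", 42L, 1_698_000_000_000L);
        System.out.println(store.get("alice").get());            // 42
        System.out.println(store.timestampedGet("alice").get()); // value plus timestamp
    }
}
```

Under this model, both query types can be served by ts-kv-stores with no restriction on `KeyQuery`, which is the alternative Matthias raises.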

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.6 #99

2023-10-23 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 307422 lines...]
> Task :connect:json:jar UP-TO-DATE
> Task :connect:api:compileTestJava UP-TO-DATE
> Task :connect:api:testClasses UP-TO-DATE
> Task :connect:json:generateMetadataFileForMavenJavaPublication
> Task :connect:json:testSrcJar
> Task :connect:api:testJar
> Task :connect:api:testSrcJar
> Task :clients:generateMetadataFileForMavenJavaPublication
> Task :connect:json:publishMavenJavaPublicationToMavenLocal
> Task :connect:api:publishMavenJavaPublicationToMavenLocal
> Task :connect:api:publishToMavenLocal
> Task :connect:json:publishToMavenLocal
> Task :storage:api:compileTestJava
> Task :storage:api:testClasses
> Task :server-common:compileTestJava
> Task :server-common:testClasses
> Task :raft:compileTestJava
> Task :raft:testClasses

> Task :clients:javadoc
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java:81:
 warning - Tag @link:illegal character: "60" in "#define(String, Type, 
Importance, String, String, int, Width, String, List)"
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java:81:
 warning - Tag @link:illegal character: "62" in "#define(String, Type, 
Importance, String, String, int, Width, String, List)"
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java:81:
 warning - Tag @link: can't find define(String, Type, Importance, String, 
String, int, Width, String, List) in 
org.apache.kafka.common.config.ConfigDef

> Task :group-coordinator:compileTestJava
> Task :group-coordinator:testClasses

> Task :clients:javadoc
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/clients/src/main/java/org/apache/kafka/clients/admin/ScramMechanism.java:32:
 warning - Tag @see: missing final '>': "https://cwiki.apache.org/confluence/display/KAFKA/KIP-554%3A+Add+Broker-side+SCRAM+Config+API;>KIP-554:
 Add Broker-side SCRAM Config API

 This code is duplicated in 
org.apache.kafka.common.security.scram.internals.ScramMechanism.
 The type field in both files must match and must not change. The type field
 is used both for passing ScramCredentialUpsertion and for the internal
 UserScramCredentialRecord. Do not change the type field."
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/package-info.java:21:
 warning - Tag @link: reference not found: 
org.apache.kafka.common.security.oauthbearer
5 warnings

> Task :metadata:compileTestJava
> Task :metadata:testClasses
> Task :clients:javadocJar
> Task :clients:srcJar
> Task :clients:testJar
> Task :clients:testSrcJar
> Task :clients:publishMavenJavaPublicationToMavenLocal
> Task :clients:publishToMavenLocal
> Task :core:classes
> Task :core:compileTestJava NO-SOURCE
> Task :core:compileTestScala
> Task :core:testClasses
> Task :streams:compileTestJava
> Task :streams:testClasses
> Task :streams:testJar
> Task :streams:testSrcJar
> Task :streams:publishMavenJavaPublicationToMavenLocal
> Task :streams:publishToMavenLocal

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

For more on this, please refer to 
https://docs.gradle.org/8.2.1/userguide/command_line_interface.html#sec:command_line_warnings
 in the Gradle documentation.

BUILD SUCCESSFUL in 9m
94 actionable tasks: 41 executed, 53 up-to-date

Publishing build scan...
https://ge.apache.org/s/mg3l56rzm5ooo

[Pipeline] sh
+ grep ^version= gradle.properties
+ cut -d= -f 2
[Pipeline] dir
Running in 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/streams/quickstart
[Pipeline] {
[Pipeline] sh
+ mvn clean install -Dgpg.skip
[INFO] Scanning for projects...
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] Kafka Streams :: Quickstart[pom]
[INFO] streams-quickstart-java[maven-archetype]
[INFO] 
[INFO] < org.apache.kafka:streams-quickstart >-
[INFO] Building Kafka Streams :: Quickstart 3.6.1-SNAPSHOT[1/2]
[INFO]   from pom.xml
[INFO] [ pom ]-
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart ---
[INFO] 
[INFO] --- site:3.5.1:attach-descriptor (attach-descriptor) @ 
streams-quickstart ---
[INFO] 
[INFO] --- gpg:1.6:sign (sign-artifacts) @ streams-quickstart ---
[INFO] 
[INFO] --- install:2.5.2:install 

Re: [DISCUSS] Road to Kafka 4.0

2023-10-23 Thread Mickael Maison
Hi Luke,

Thanks for starting this discussion. I think it's very important that
we communicate our plans and progress to send a clear message to
users.

Regarding KRaft missing features, I tend to agree with Christopher
that it would be much better to get them merged and declared
production ready before 4.0. Otherwise users relying on them will not
be able to upgrade to 4.0 and will have to wait. In my opinion, this
would send a pretty negative message to these users.

In terms of other changes planned for 4.0, what about the Log4j2 KIPs?
We voted on KIP-653 and KIP-719 and decided to wait for 4.0 to make the
move.

Finally, we probably want a clear statement on the support plan for the
3.X series once 4.0 is out.

Thanks,
Mickael




On Wed, Oct 11, 2023 at 6:45 PM Christopher Shannon
 wrote:
>
> I think JBOD definitely needs to be before 4.0. That has been a blocker
> issue this entire time for me and my team and I'm sure others. While Kraft
> has been technically "production ready" for a while, I haven't been able to
> upgrade because of missing JBOD support.
>
> On Wed, Oct 11, 2023 at 12:15 PM Ismael Juma  wrote:
>
> > Hi Luke,
> >
> > This is a good discussion. And there is a lot more to it than KRaft.
> >
> > With regards to KRaft, there are two separate items:
> > 1. Bugs
> > 2. Missing features when compared to ZK
> >
> > When it comes to bugs, I don't see why 4.0 is particularly relevant. KRaft
> > has been considered production-ready for over a year. If the bug is truly
> > critical, we should fix it for 3.6.1 or 3.7.0 (depending on the
> > complexity).
> >
> > When it comes to missing features, it would be preferable to land them
> > before 4.0 as well (ideally 3.7). I believe KIP-858 (JBOD) is the obvious
> > one in this category, but there are a few more in your list worth
> > discussing.
> >
> > Ismael
> >
> > On Wed, Oct 11, 2023 at 5:18 AM Luke Chen  wrote:
> >
> > > Hi all,
> > >
> > > While Kafka 3.6.0 is released, I’d like to start the discussion for the
> > > “road to Kafka 4.0”. Based on the plan in KIP-833
> > > <
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-833%3A+Mark+KRaft+as+Production+Ready#KIP833:MarkKRaftasProductionReady-Kafka3.7
> > > >,
> > > the next release 3.7 will be the final release before moving to Kafka 4.0
> > > to remove the Zookeeper from Kafka. Before making this major change, I'd
> > > like to get consensus on the "must-have features/fixes for Kafka 4.0", to
> > > avoid some users being surprised when upgrading to Kafka 4.0. The intent
> > is
> > > to have a clear communication about what to expect in the following
> > months.
> > > In particular we should be signaling what features and configurations are
> > > not supported, or at risk (if no one is able to add support or fix known
> > > bugs).
> > >
> > > Here is the JIRA tickets list
> > > 
> > I
> > > labeled for "4.0-blocker". The criteria I labeled as “4.0-blocker” are:
> > > 1. The feature is supported in Zookeeper Mode, but not supported in KRaft
> > > mode, yet (ex: KIP-858: JBOD in KRaft)
> > > 2. Critical bugs in KRaft, (ex: KAFKA-15489 : split brain in KRaft
> > > controller quorum)
> > >
> > > If you disagree with my current list, welcome to have discussion in the
> > > specific JIRA ticket. Or, if you think there are some tickets I missed,
> > > welcome to start a discussion in the JIRA ticket and ping me or other
> > > people. After we get the consensus, we can label/unlabel it afterwards.
> > > Again, the goal is to have an open communication with the community about
> > > what will be coming in 4.0.
> > >
> > > Below is the high level category of the list content:
> > >
> > > 1. Recovery from disk failure
> > > KIP-856
> > > <
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-856:+KRaft+Disk+Failure+Recovery
> > > >:
> > > KRaft Disk Failure Recovery
> > >
> > > 2. Prevote to support controllers more than 3
> > > KIP-650
> > > <
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-650%3A+Enhance+Kafkaesque+Raft+semantics
> > > >:
> > > Enhance Kafkaesque Raft semantics
> > >
> > > 3. JBOD support
> > > KIP-858
> > > <
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-858%3A+Handle+JBOD+broker+disk+failure+in+KRaft
> > > >:
> > > Handle
> > > JBOD broker disk failure in KRaft
> > >
> > > 4. Scale up/down Controllers
> > > KIP-853
> > > <
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-853%3A+KRaft+Controller+Membership+Changes
> > > >:
> > > KRaft Controller Membership Changes
> > >
> > > 5. Modifying dynamic configurations on the KRaft controller
> > >
> > > 6. Critical bugs in KRaft
> > >
> > > Does this make sense?
> > > Any feedback is welcomed.
> > >
> > > Thank you.
> > > Luke
> > >
> >


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2321

2023-10-23 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-15666) Добавить функцию поиска по почте (Russian: "Add an email search function")

2023-10-23 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-15666.
-
Resolution: Invalid

[~noraverba] – We can only take tickets in English. – Also, I piped the title 
through a translator and it did not really make sense to me, especially as the 
description is empty.

Closing as invalid.

> Добавить функцию поиска по почте
> 
>
> Key: KAFKA-15666
> URL: https://issues.apache.org/jira/browse/KAFKA-15666
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Eleonora
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15672) Add 3.6 to streams system tests

2023-10-23 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-15672:
---

 Summary: Add 3.6 to streams system tests
 Key: KAFKA-15672
 URL: https://issues.apache.org/jira/browse/KAFKA-15672
 Project: Kafka
  Issue Type: Test
  Components: streams, system tests
Reporter: Matthias J. Sax


3.6.0 was released recently. We need to add `3.6.0` to the system tests (in 
particular upgrade and broker compatibility tests)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Re: [DISCUSS] KIP-905: Broker interceptors

2023-10-23 Thread Andrew Otto
FWIW, this would be very useful for the Wikimedia Foundation's Event
Platform.  We have some requirements
for our event stream producers, and not having to re-implement this logic
in multiple programming languages and frameworks would be really nice.

I had doubts about making brokers more complex as well, but
> One benefit of pluggable interceptors is that they don't affect users who
don't need and don't use them, so the Kafka robustness remains at the
baseline

were my thoughts too.  This is an opt-in feature.

It would be nice if there was configuration to includelist or
excludelist certain topics from passing through the interceptor logic.  I
suppose the custom interceptor implementation could just pass if the topic
shouldn't be intercepted.  But I think I'd prefer if custom code execution
could be avoided for certain topics, just in case there is a bug deployed
in the custom interceptor.
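Purely as an illustration of the two ideas in this thread — a defensively written interceptor (catch everything, fall back to pass-through or a DLQ, as David describes below) and an excludelist so certain topics never execute custom code — here is a hypothetical sketch. KIP-905 has not settled on an interface, so every name here (`BrokerInterceptor`, `processRecord`, `SafeInterceptorWrapper`) is invented for this example.

```java
import java.util.Set;

public class InterceptorSketch {
    // Minimal stand-in for a broker-side record (illustrative only).
    record Record(String topic, byte[] value) {}

    // Hypothetical pluggable interceptor contract.
    interface BrokerInterceptor {
        Record processRecord(Record record) throws Exception;
    }

    static class SafeInterceptorWrapper {
        private final BrokerInterceptor delegate;
        private final Set<String> excludedTopics;

        SafeInterceptorWrapper(BrokerInterceptor delegate, Set<String> excludedTopics) {
            this.delegate = delegate;
            this.excludedTopics = excludedTopics;
        }

        Record intercept(Record record) {
            // Excluded topics bypass custom code entirely, so a buggy
            // interceptor deployment cannot affect them.
            if (excludedTopics.contains(record.topic())) {
                return record;
            }
            try {
                return delegate.processRecord(record);
            } catch (Exception e) {
                // Defensive policy: let the record through unchanged. A real
                // implementation might instead route it to a DLQ topic and
                // re-ingest once the interceptor is fixed.
                return record;
            }
        }
    }

    public static void main(String[] args) {
        // Example delegate: a naive privacy scrubber.
        BrokerInterceptor scrubber = r -> new Record(
            r.topic(), new String(r.value()).replace("secret", "***").getBytes());
        SafeInterceptorWrapper wrapper =
            new SafeInterceptorWrapper(scrubber, Set.of("internal-metrics"));

        Record scrubbed = wrapper.intercept(new Record("events", "secret data".getBytes()));
        System.out.println(new String(scrubbed.value()));  // *** data

        Record untouched = wrapper.intercept(new Record("internal-metrics", "secret".getBytes()));
        System.out.println(new String(untouched.value())); // secret
    }
}
```

Whether bypassed topics should be a broker config or left to the interceptor implementation is exactly the open question raised above.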





On Fri, Oct 20, 2023 at 11:19 AM Ivan Yurchenko  wrote:

> Hi David and Ahmed,
>
> First, thank you David for the KIP. It would be very valuable for multiple
> use cases. Products like Conduktor Gateway [1] validate the demand and
> offer many potential use cases [2].
>
> Now, I understand Ahmed's concerns about possible in-band interruptions,
> they are valid. However, certain use cases cannot be handled without
> intercepting the request flow to Kafka brokers (for example, the
> broker-side schema validation.) A number of open source and proprietary
> proxy solutions exist and they have their user base, for which the benefits
> outweigh the risks. In the current state, the broker itself already has
> injection points for custom code executed in the hot path of message
> handling, namely the Authorizer.
>
> One benefit of pluggable interceptors is that they don't affect users who
> don't need and don't use them, so the Kafka robustness remains at the
> baseline. Those who need this functionality, can make their conscious
> decision. So to me it seems this will be positive to Kafka community and
> ecosystem.
>
> Best regards,
> Ivan
>
> [1] https://docs.conduktor.io/gateway/
> [2] https://marketplace.conduktor.io/
>
> On 2023/02/10 16:41:01 David Mariassy wrote:
> > Hi Ahmed,
> >
> > Thanks for taking a look at the KIP, and for your insightful feedback!
> >
> > I don't disagree with the sentiment that in-band interceptors could be a
> > potential source of bugs in a cluster.
> >
> > Having said that, I don't necessarily think that an in-band interceptor is
> > significantly riskier than an out-of-band pre-processor. Let's take the
> > example of platform-wide privacy scrubbing. In my opinion it doesn't really
> > matter if this feature is deployed as an out-of-band stream processor app
> > that consumes from all topics OR if the logic is implemented as an in-band
> > interceptor. Either way, a faulty release of the scrubber will result in
> > the platform-wide disruption of data flows. Thus, I'd argue that from the
> > perspective of the platform's overall health, the level of risk is very
> > comparable in both cases. However in-band interceptors have a couple of
> > advantages in my opinion:
> > 1. They are significantly cheaper (don't require duplicating data between
> > raw and sanitized topics. There are also a lot of potential savings in
> > network costs)
> > 2. They are easier to maintain (no need to set up additional
> infrastructure
> > for out-of-band processing)
> > 3. They can provide accurate produce responses to clients (since there is
> > no downstream processing that could asynchronously render a client's
> > messages invalid)
> >
> > Also, in-band interceptors could be as safe or risky as their authors
> > design them to be. There's nothing stopping someone from catching all
> > exceptions in a `processRecord` method, and letting all unprocessed
> > messages go through or sending them to a DLQ. Once the interceptor is
> > fixed, those unprocessed messages could get re-ingested into Kafka to
> > re-attempt pre-processing.
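The defensive pattern David describes (catch everything, pass the record through, park failures in a DLQ) can be sketched as follows. Note this is purely illustrative: no broker-side interceptor API exists in Kafka today, so the `RecordInterceptor` interface, `processRecord` signature, and DLQ representation are all hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class SafeInterceptor {

    // Hypothetical interceptor contract, mirroring the KIP discussion.
    interface RecordInterceptor {
        String processRecord(String record) throws Exception;
    }

    // Wraps a user interceptor so that a faulty release degrades to
    // pass-through plus a DLQ entry, instead of failing produce requests
    // platform-wide.
    static String interceptSafely(RecordInterceptor interceptor,
                                  String record, List<String> dlq) {
        try {
            return interceptor.processRecord(record);
        } catch (Exception e) {
            dlq.add(record);  // park for re-ingestion once the interceptor is fixed
            return record;    // let the unprocessed message through
        }
    }

    public static void main(String[] args) {
        List<String> dlq = new ArrayList<>();
        RecordInterceptor scrubber = r -> { throw new RuntimeException("bad release"); };
        String out = interceptSafely(scrubber, "user-event", dlq);
        System.out.println(out + " dlq=" + dlq);  // user-event dlq=[user-event]
    }
}
```

Whether pass-through or DLQ-only is the right failure mode would be up to the interceptor author, which is exactly David's point about safety being a design choice.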
> >
> > Thanks and happy Friday,
> > David
> >
> >
> >
> >
> >
> > On Fri, Feb 10, 2023 at 8:23 AM Ahmed Abdalla 
> > wrote:
> >
> > > Hi David,
> > >
> > > That's a very interesting KIP and I wanted to share my two cents. I
> believe
> > > there's a lot of value and use cases for the ability to intercept,
> mutate
> > > and filter Kafka's messages, however I'm not sure if trying to achieve
> that
> > > via in-band interceptors is the best approach for this.
> > >
> > >- My mental model around one of Kafka's core values is the brokers'
> > >focus on a single functionality (more or less): highly available and
> > > fault
> > >tolerant commit log. I see this in many design decisions such as
> > >off-loading responsibilities to the clients (partitioner, assignor,
> > >consumer groups coordination etc).
> > >- And the impact of this KIP on the Kafka server would be adding
> another
> > >

Re: [VOTE] KIP-988 Streams StandbyUpdateListener

2023-10-23 Thread Bill Bejeck
This is a great addition

+1(binding)

-Bill

On Fri, Oct 20, 2023 at 2:29 PM Almog Gavra  wrote:

> +1 (non-binding) - great improvement, thanks Colt & Eduwer!
>
> On Tue, Oct 17, 2023 at 11:25 AM Guozhang Wang  >
> wrote:
>
> > +1 from me.
> >
> > On Mon, Oct 16, 2023 at 1:56 AM Lucas Brutschy
> >  wrote:
> > >
> > > Hi,
> > >
> > > thanks again for the KIP!
> > >
> > > +1 (binding)
> > >
> > > Cheers,
> > > Lucas
> > >
> > >
> > >
> > > On Sun, Oct 15, 2023 at 9:13 AM Colt McNealy 
> > wrote:
> > > >
> > > > Hello there,
> > > >
> > > > I'd like to call a vote on KIP-988 (co-authored by my friend and
> > colleague
> > > > Eduwer Camacaro). We are hoping to get it in before the 3.7.0
> release.
> > > >
> > > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-988%3A+Streams+Standby+Task+Update+Listener
> > > >
> > > > Cheers,
> > > > Colt McNealy
> > > >
> > > > *Founder, LittleHorse.dev*
> >
>


Re: [VOTE] KIP-979 Allow independently stop KRaft processes

2023-10-23 Thread Ron Dagostino
Hi Hailey.  I'm +1 (binding), but could you add a "Rejected
Alternatives" section to the KIP and mention the "--required-config "
option that we decided against and the reason why we made the decision
to reject it?  There were some other small things (dash instead of dot
in the parameter names, --node-id instead of --broker-id), but
cosmetic things like this don't warrant a mention, so I think there's
just the one thing to document.

Thanks for the KIP, and thanks for adjusting it along the way as the
discussion moved forward.

Ron



On Mon, Oct 23, 2023 at 4:00 AM Federico Valeri  wrote:
>
> +1 (non binding)
>
> Thanks.
>
> On Mon, Oct 23, 2023 at 9:48 AM Kamal Chandraprakash
>  wrote:
> >
> > +1 (non-binding). Thanks for the KIP!
> >
> > On Mon, Oct 23, 2023, 12:55 Hailey Ni  wrote:
> >
> > > Hi all,
> > >
> > > I'd like to call a vote on KIP-979 that will allow users to independently
> > > stop KRaft processes.
> > >
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-979%3A+Allow+independently+stop+KRaft+processes
> > >
> > > Thanks,
> > > Hailey
> > >


[jira] [Created] (KAFKA-15671) Flaky test RemoteIndexCacheTest.testClearCacheAndIndexFilesWhenResizeCache

2023-10-23 Thread Divij Vaidya (Jira)
Divij Vaidya created KAFKA-15671:


 Summary: Flaky test 
RemoteIndexCacheTest.testClearCacheAndIndexFilesWhenResizeCache
 Key: KAFKA-15671
 URL: https://issues.apache.org/jira/browse/KAFKA-15671
 Project: Kafka
  Issue Type: Test
  Components: Tiered-Storage
Reporter: Divij Vaidya


context: [https://github.com/apache/kafka/pull/14483#issuecomment-1775107621] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.6 #98

2023-10-23 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 308045 lines...]
> Task :connect:json:testJar
> Task :connect:api:generateMetadataFileForMavenJavaPublication
> Task :connect:json:copyDependantLibs UP-TO-DATE
> Task :connect:json:jar UP-TO-DATE
> Task :connect:json:generateMetadataFileForMavenJavaPublication
> Task :connect:api:compileTestJava UP-TO-DATE
> Task :connect:api:testClasses UP-TO-DATE
> Task :connect:json:testSrcJar
> Task :connect:api:testJar
> Task :connect:api:testSrcJar
> Task :clients:generateMetadataFileForMavenJavaPublication
> Task :connect:api:publishMavenJavaPublicationToMavenLocal
> Task :connect:api:publishToMavenLocal
> Task :connect:json:publishMavenJavaPublicationToMavenLocal
> Task :connect:json:publishToMavenLocal
> Task :storage:api:compileTestJava
> Task :storage:api:testClasses
> Task :server-common:compileTestJava
> Task :server-common:testClasses
> Task :raft:compileTestJava
> Task :raft:testClasses
> Task :group-coordinator:compileTestJava
> Task :group-coordinator:testClasses

> Task :clients:javadoc
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java:81:
 warning - Tag @link:illegal character: "60" in "#define(String, Type, 
Importance, String, String, int, Width, String, List)"
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java:81:
 warning - Tag @link:illegal character: "62" in "#define(String, Type, 
Importance, String, String, int, Width, String, List)"
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java:81:
 warning - Tag @link: can't find define(String, Type, Importance, String, 
String, int, Width, String, List) in 
org.apache.kafka.common.config.ConfigDef
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/clients/src/main/java/org/apache/kafka/clients/admin/ScramMechanism.java:32:
 warning - Tag @see: missing final '>': "https://cwiki.apache.org/confluence/display/KAFKA/KIP-554%3A+Add+Broker-side+SCRAM+Config+API;>KIP-554:
 Add Broker-side SCRAM Config API

 This code is duplicated in 
org.apache.kafka.common.security.scram.internals.ScramMechanism.
 The type field in both files must match and must not change. The type field
 is used both for passing ScramCredentialUpsertion and for the internal
 UserScramCredentialRecord. Do not change the type field."
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/package-info.java:21:
 warning - Tag @link: reference not found: 
org.apache.kafka.common.security.oauthbearer
5 warnings

> Task :metadata:compileTestJava
> Task :metadata:testClasses
> Task :clients:javadocJar
> Task :clients:srcJar
> Task :clients:testJar
> Task :clients:testSrcJar
> Task :clients:publishMavenJavaPublicationToMavenLocal
> Task :clients:publishToMavenLocal
> Task :core:classes
> Task :core:compileTestJava NO-SOURCE
> Task :core:compileTestScala
> Task :core:testClasses
> Task :streams:compileTestJava
> Task :streams:testClasses
> Task :streams:testJar
> Task :streams:testSrcJar
> Task :streams:publishMavenJavaPublicationToMavenLocal
> Task :streams:publishToMavenLocal

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

For more on this, please refer to 
https://docs.gradle.org/8.2.1/userguide/command_line_interface.html#sec:command_line_warnings
 in the Gradle documentation.

BUILD SUCCESSFUL in 8m
94 actionable tasks: 41 executed, 53 up-to-date

Publishing build scan...
https://ge.apache.org/s/vitjonmq6byno

[Pipeline] sh
+ grep ^version= gradle.properties
+ cut -d= -f 2
[Pipeline] dir
Running in 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/streams/quickstart
[Pipeline] {
[Pipeline] sh
+ mvn clean install -Dgpg.skip
[INFO] Scanning for projects...
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] Kafka Streams :: Quickstart[pom]
[INFO] streams-quickstart-java[maven-archetype]
[INFO] 
[INFO] < org.apache.kafka:streams-quickstart >-
[INFO] Building Kafka Streams :: Quickstart 3.6.1-SNAPSHOT[1/2]
[INFO]   from pom.xml
[INFO] [ pom ]-
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart ---
[INFO] 
[INFO] --- site:3.5.1:attach-descriptor (attach-descriptor) @ 
streams-quickstart ---
[INFO] 
[INFO] 

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.5 #85

2023-10-23 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 4564 lines...]
 Required by:
 project :streams:upgrade-system-tests-35

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

See 
https://docs.gradle.org/8.0.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 23s
263 actionable tasks: 213 executed, 50 up-to-date

See the profiling report at: 
file:///home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.5@2/build/reports/profile/profile-2023-10-23-11-48-28.html
A fine-grained performance profile is available: use the --scan option.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch JDK 11 and Scala 2.12
[Warn] 
/home/jenkins/workspace/Kafka_kafka_3.5/core/src/test/scala/unit/kafka/server/AdvertiseBrokerTest.scala:22:21:
 imported `QuorumTestHarness` is permanently hidden by definition of object 
QuorumTestHarness in package server
[Warn] 
/home/jenkins/workspace/Kafka_kafka_3.5/core/src/test/scala/unit/kafka/server/BrokerEpochIntegrationTest.scala:27:21:
 imported `QuorumTestHarness` is permanently hidden by definition of object 
QuorumTestHarness in package server
[Warn] 
/home/jenkins/workspace/Kafka_kafka_3.5/core/src/test/scala/unit/kafka/server/DynamicConfigTest.scala:21:21:
 imported `QuorumTestHarness` is permanently hidden by definition of object 
QuorumTestHarness in package server
[Warn] 
/home/jenkins/workspace/Kafka_kafka_3.5/core/src/test/scala/unit/kafka/server/KafkaMetricReporterClusterIdTest.scala:23:21:
 imported `QuorumTestHarness` is permanently hidden by definition of object 
QuorumTestHarness in package server
[Warn] 
/home/jenkins/workspace/Kafka_kafka_3.5/core/src/test/scala/unit/kafka/server/LeaderElectionTest.scala:32:21:
 imported `QuorumTestHarness` is permanently hidden by definition of object 
QuorumTestHarness in package server
[Warn] 
/home/jenkins/workspace/Kafka_kafka_3.5/core/src/test/scala/unit/kafka/server/LogRecoveryTest.scala:25:21:
 imported `QuorumTestHarness` is permanently hidden by definition of object 
QuorumTestHarness in package server
[Warn] 
/home/jenkins/workspace/Kafka_kafka_3.5/core/src/test/scala/unit/kafka/server/ServerGenerateBrokerIdTest.scala:23:21:
 imported `QuorumTestHarness` is permanently hidden by definition of object 
QuorumTestHarness in package server
[Warn] 
/home/jenkins/workspace/Kafka_kafka_3.5/core/src/test/scala/unit/kafka/server/ServerGenerateClusterIdTest.scala:29:21:
 imported `QuorumTestHarness` is permanently hidden by definition of object 
QuorumTestHarness in package server
> Task :core:testClasses
> Task :core:spotbugsTest SKIPPED
> Task :core:checkstyleTest
> Task :tools:compileTestJava
> Task :tools:testClasses
> Task :tools:spotbugsTest SKIPPED
> Task :storage:compileTestJava
> Task :storage:testClasses
> Task :storage:spotbugsTest SKIPPED
> Task :tools:checkstyleTest
> Task :tools:check
> Task :storage:checkstyleTest
> Task :storage:check
> Task :jmh-benchmarks:compileJava
> Task :jmh-benchmarks:classes
> Task :jmh-benchmarks:compileTestJava NO-SOURCE
> Task :jmh-benchmarks:testClasses UP-TO-DATE
> Task :jmh-benchmarks:checkstyleTest NO-SOURCE
> Task :jmh-benchmarks:spotbugsTest SKIPPED
> Task :jmh-benchmarks:checkstyleMain
> Task :connect:runtime:compileTestJava
> Task :connect:runtime:testClasses
> Task :connect:runtime:spotbugsTest SKIPPED
> Task :connect:mirror:compileTestJava
> Task :connect:mirror:testClasses
> Task :connect:mirror:spotbugsTest SKIPPED
> Task :connect:mirror:checkstyleTest
> Task :connect:mirror:check
> Task :connect:runtime:checkstyleTest
> Task :connect:runtime:check
> Task :streams:compileTestJava
> Task :core:spotbugsMain
> Task :jmh-benchmarks:spotbugsMain
> Task :jmh-benchmarks:check
> Task :core:check
> Task :streams:testClasses
> Task :streams:streams-scala:compileTestJava NO-SOURCE
> Task :streams:spotbugsTest SKIPPED
> Task :streams:streams-scala:compileTestScala
> Task :streams:streams-scala:testClasses
> Task :streams:streams-scala:checkstyleTest NO-SOURCE
> Task :streams:streams-scala:spotbugsTest SKIPPED
> Task :streams:streams-scala:check
> Task :streams:checkstyleTest
> Task :streams:check

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':streams:upgrade-system-tests-35:compileTestJava'.
> Could not 

[jira] [Resolved] (KAFKA-15093) Add 3.5.0 to broker/client and streams upgrade/compatibility tests

2023-10-23 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15093.

Fix Version/s: 3.5.2
   3.7.0
   3.6.1
   Resolution: Fixed

> Add 3.5.0 to broker/client and streams upgrade/compatibility tests
> --
>
> Key: KAFKA-15093
> URL: https://issues.apache.org/jira/browse/KAFKA-15093
> Project: Kafka
>  Issue Type: Task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.5.2, 3.7.0, 3.6.1
>
>
> Per the penultimate bullet on the [release 
> checklist|https://cwiki.apache.org/confluence/display/KAFKA/Release+Process#ReleaseProcess-Afterthevotepasses],
>  Kafka v3.5.0 is released. We should add this version to the system tests.
> Example PRs:
>  * Broker and clients: [https://github.com/apache/kafka/pull/6794]
>  * Streams: [https://github.com/apache/kafka/pull/6597/files]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2320

2023-10-23 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 423553 lines...]

Gradle Test Run :core:test > Gradle Test Executor 88 > ZkMigrationClientTest > 
testClaimAbsentController() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZkMigrationClientTest > 
testIdempotentCreateTopics() STARTED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZkMigrationClientTest > 
testIdempotentCreateTopics() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZkMigrationClientTest > 
testCreateNewTopic() STARTED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZkMigrationClientTest > 
testCreateNewTopic() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZkMigrationClientTest > 
testUpdateExistingTopicWithNewAndChangedPartitions() STARTED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZkMigrationClientTest > 
testUpdateExistingTopicWithNewAndChangedPartitions() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testZNodeChangeHandlerForDataChange() STARTED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testZNodeChangeHandlerForDataChange() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testZooKeeperSessionStateMetric() STARTED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testZooKeeperSessionStateMetric() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testExceptionInBeforeInitializingSession() STARTED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testExceptionInBeforeInitializingSession() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testGetChildrenExistingZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testGetChildrenExistingZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testConnection() STARTED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testConnection() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testZNodeChangeHandlerForCreation() STARTED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testZNodeChangeHandlerForCreation() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testGetAclExistingZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testGetAclExistingZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testSessionExpiryDuringClose() STARTED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testSessionExpiryDuringClose() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testReinitializeAfterAuthFailure() STARTED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testReinitializeAfterAuthFailure() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testSetAclNonExistentZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testSetAclNonExistentZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testConnectionLossRequestTermination() STARTED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testConnectionLossRequestTermination() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testExistsNonExistentZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testExistsNonExistentZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testGetDataNonExistentZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testGetDataNonExistentZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testConnectionTimeout() STARTED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testConnectionTimeout() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler() STARTED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testUnresolvableConnectString() STARTED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testUnresolvableConnectString() PASSED

Gradle Test Run :core:test > Gradle Test Executor 88 > ZooKeeperClientTest > 
testGetChildrenNonExistentZNode() STARTED

Gradle Test Run :core:test 

[jira] [Created] (KAFKA-15670) KRaft controller using wrong listener to send RPC to brokers in dual-write mode

2023-10-23 Thread Luke Chen (Jira)
Luke Chen created KAFKA-15670:
-

 Summary: KRaft controller using wrong listener to send RPC to 
brokers in dual-write mode 
 Key: KAFKA-15670
 URL: https://issues.apache.org/jira/browse/KAFKA-15670
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.6.0
Reporter: Luke Chen


During ZK migration to KRaft, before entering dual-write mode, the KRaft 
controller sends RPCs (i.e. UpdateMetadataRequest, LeaderAndIsrRequest, and 
StopReplicaRequest) to the brokers. Currently, we use the inter-broker listener 
to send these RPCs to brokers from the controller. Although these RPCs are used 
for ZK brokers, in our case the sender is actually the KRaft controller. In KRaft 
mode, the controller should talk to brokers via `controller.listener.names`, 
not `inter.broker.listener.name`. It is surprising that the KRaft controller 
config should need to contain `inter.broker.listener.name`.

 
{code:java}
[2023-10-23 17:12:36,788] ERROR Encountered zk migration fault: Unhandled error 
in SendRPCsToBrokersEvent (org.apache.kafka.server.fault.LoggingFaultHandler)
kafka.common.BrokerEndPointNotAvailableException: End point with listener name 
PLAINTEXT not found for broker 0
at kafka.cluster.Broker.$anonfun$node$1(Broker.scala:94)
at scala.Option.getOrElse(Option.scala:201)
at kafka.cluster.Broker.node(Broker.scala:93)
at 
kafka.controller.ControllerChannelManager.addNewBroker(ControllerChannelManager.scala:122)
at 
kafka.controller.ControllerChannelManager.addBroker(ControllerChannelManager.scala:105)
at 
kafka.migration.MigrationPropagator.$anonfun$publishMetadata$2(MigrationPropagator.scala:97)
at 
kafka.migration.MigrationPropagator.$anonfun$publishMetadata$2$adapted(MigrationPropagator.scala:97)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:168)
at 
kafka.migration.MigrationPropagator.publishMetadata(MigrationPropagator.scala:97)
at 
kafka.migration.MigrationPropagator.sendRPCsToBrokersFromMetadataImage(MigrationPropagator.scala:217)
at 
org.apache.kafka.metadata.migration.KRaftMigrationDriver$SendRPCsToBrokersEvent.run(KRaftMigrationDriver.java:723)

{code}
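For illustration, the mismatch above arises with a controller config along these lines; all listener names below are hypothetical examples, not values taken from the report:

```properties
# Hypothetical KRaft controller config during ZK-to-KRaft migration.
process.roles=controller
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT
# Because the controller currently resolves broker endpoints via the
# inter-broker listener, the lookup fails unless the brokers expose a
# listener matching the controller's expected inter.broker.listener.name
# (PLAINTEXT by default), which is what the stack trace above shows.
```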



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15457) Add support for OffsetFetch version 9 in admin

2023-10-23 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15457.
-
Fix Version/s: 3.7.0
   Resolution: Fixed

> Add support for OffsetFetch version 9 in admin
> --
>
> Key: KAFKA-15457
> URL: https://issues.apache.org/jira/browse/KAFKA-15457
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: David Jacot
>Assignee: Sagar Rao
>Priority: Minor
>  Labels: kip-848, kip-848-client-support, kip-848-preview
> Fix For: 3.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15664) Add 3.4.0 streams upgrade/compatibility tests

2023-10-23 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15664.

Resolution: Fixed

> Add 3.4.0 streams upgrade/compatibility tests
> -
>
> Key: KAFKA-15664
> URL: https://issues.apache.org/jira/browse/KAFKA-15664
> Project: Kafka
>  Issue Type: Task
>  Components: streams, system tests
>Affects Versions: 3.5.0
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Critical
> Fix For: 3.5.2, 3.7.0, 3.6.1
>
>
> Per the penultimate bullet on the release checklist, Kafka v3.4.0 is 
> released. We should add this version to the system tests.
> Example PR: https://github.com/apache/kafka/pull/6597/files



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15669) Implement telemetry naming strategy

2023-10-23 Thread Apoorv Mittal (Jira)
Apoorv Mittal created KAFKA-15669:
-

 Summary: Implement telemetry naming strategy
 Key: KAFKA-15669
 URL: https://issues.apache.org/jira/browse/KAFKA-15669
 Project: Kafka
  Issue Type: Sub-task
Reporter: Apoorv Mittal
Assignee: Apoorv Mittal


Define classes and implement the telemetry metrics naming strategy for KIP-714 
as defined here: 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-714%3A+Client+metrics+and+observability#KIP714:Clientmetricsandobservability-Metricsnamingandformat]

 

The naming strategy must also support delta temporality metrics with a suffix 
appended to the original metric name.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15668) Add Opentelemetry Proto library with shadowed classes

2023-10-23 Thread Apoorv Mittal (Jira)
Apoorv Mittal created KAFKA-15668:
-

 Summary: Add Opentelemetry Proto library with shadowed classes
 Key: KAFKA-15668
 URL: https://issues.apache.org/jira/browse/KAFKA-15668
 Project: Kafka
  Issue Type: Sub-task
Reporter: Apoorv Mittal
Assignee: Apoorv Mittal


KIP-714 requires the addition of a [Java client 
dependency|https://cwiki.apache.org/confluence/display/KAFKA/KIP-714%3A+Client+metrics+and+observability#KIP714:Clientmetricsandobservability-Javaclientdependencies]
 on {{opentelemetry-proto}}, which also brings a transitive dependency on 
{{protobuf-java}}. The dependencies should be shadowed to avoid JVM versioning 
conflicts.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15667) preCheck the invalid configuration for tiered storage replication factor

2023-10-23 Thread Luke Chen (Jira)
Luke Chen created KAFKA-15667:
-

 Summary: preCheck the invalid configuration for tiered storage 
replication factor
 Key: KAFKA-15667
 URL: https://issues.apache.org/jira/browse/KAFKA-15667
 Project: Kafka
  Issue Type: Improvement
  Components: Tiered-Storage
Affects Versions: 3.6.0
Reporter: Luke Chen


`remote.log.metadata.topic.replication.factor` sets the replication factor of 
the remote log metadata topic. For `min.insync.replicas`, we use the broker 
config. Today, if `remote.log.metadata.topic.replication.factor` < 
`min.insync.replicas`, everything still works until new remote log metadata 
records are created. We should detect this at broker startup and notify users 
to fix the invalid config.


ref: 
https://kafka.apache.org/documentation/#remote_log_metadata_manager_remote.log.metadata.topic.replication.factor
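The startup check proposed here could look roughly like the sketch below. The config property names are the real Kafka configs; the class and method are illustrative, not actual Kafka code.

```java
// Sketch of the startup validation KAFKA-15667 proposes: fail fast if the
// remote log metadata topic could never satisfy the broker's
// min.insync.replicas, instead of failing on the first metadata write.
public class TieredStorageConfigCheck {

    public static void validate(int metadataTopicReplicationFactor,
                                int minInsyncReplicas) {
        if (metadataTopicReplicationFactor < minInsyncReplicas) {
            throw new IllegalArgumentException(
                "remote.log.metadata.topic.replication.factor ("
                    + metadataTopicReplicationFactor
                    + ") must be >= min.insync.replicas ("
                    + minInsyncReplicas + ")");
        }
    }

    public static void main(String[] args) {
        validate(3, 2);  // valid combination: no error
        try {
            validate(1, 2);  // invalid: rejected at startup
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```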



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15666) Add a search-by-email function

2023-10-23 Thread Eleonora (Jira)
Eleonora created KAFKA-15666:


 Summary: Add a search-by-email function
 Key: KAFKA-15666
 URL: https://issues.apache.org/jira/browse/KAFKA-15666
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Eleonora






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-979 Allow independently stop KRaft processes

2023-10-23 Thread Federico Valeri
+1 (non binding)

Thanks.

On Mon, Oct 23, 2023 at 9:48 AM Kamal Chandraprakash
 wrote:
>
> +1 (non-binding). Thanks for the KIP!
>
> On Mon, Oct 23, 2023, 12:55 Hailey Ni  wrote:
>
> > Hi all,
> >
> > I'd like to call a vote on KIP-979 that will allow users to independently
> > stop KRaft processes.
> >
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-979%3A+Allow+independently+stop+KRaft+processes
> >
> > Thanks,
> > Hailey
> >


Re: [VOTE] KIP-979 Allow independently stop KRaft processes

2023-10-23 Thread Kamal Chandraprakash
+1 (non-binding). Thanks for the KIP!

On Mon, Oct 23, 2023, 12:55 Hailey Ni  wrote:

> Hi all,
>
> I'd like to call a vote on KIP-979 that will allow users to independently
> stop KRaft processes.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-979%3A+Allow+independently+stop+KRaft+processes
>
> Thanks,
> Hailey
>


[VOTE] KIP-979 Allow independently stop KRaft processes

2023-10-23 Thread Hailey Ni
Hi all,

I'd like to call a vote on KIP-979 that will allow users to independently
stop KRaft processes.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-979%3A+Allow+independently+stop+KRaft+processes

Thanks,
Hailey


Re: [DISCUSS] KIP-975 Docker Image for Apache Kafka

2023-10-23 Thread Vedarth Sharma
Hi Ismael,

Thanks for the correction.

We will be following the same EOL policy as Apache Kafka, hence we have
removed it from the KIP.

Thanks and regards,
Vedarth

On Mon, Oct 23, 2023 at 12:17 PM Ismael Juma  wrote:

> Sorry, I noticed a typo in my message. I meant "Additionally, we should not
> specify the EOL policy in this KIP" since it doesn't propose changing it.
>
> Ismael
>
> On Sun, Oct 22, 2023 at 10:56 PM Vedarth Sharma 
> wrote:
>
> > Hi Ismael,
> > Thanks for the valuable feedback.
> >
> >1. No docker image specific release process: This was one of our
> >considered approaches, but we thought that docker image shouldn't
> block
> > AK
> >release. Though I agree, treating docker image as another artifact for
> >every AK release makes much more sense. Hence, releasing a new version
> > of
> >Kafka for the affected branch in such a scenario is a much cleaner
> >approach. Added this as the accepted approach in the KIP.
> >2. EOL policy: Updated in the KIP.
> >
> > Thanks and regards,
> > Vedarth
> >
> > On Sun, Oct 22, 2023 at 11:20 PM Ismael Juma  wrote:
> >
> > > Hi Vedarth,
> > >
> > > I think we shouldn't introduce any new release process that is docker
> > > specific. We should consider the software in the docker image in the
> same
> > > way as consider third party dependencies today - if there is a high
> > > severity CVE affecting any of them, we aim to release a new version of
> > > Kafka for the affected branch. It would include the latest Kafka code
> > from
> > > the branch.
> > >
> > > Additionally, we should specify the EOL policy in this KIP - we are not
> > > changing it as part of it. One interesting detail is that the release
> > > document claims we support the last 3 releases, but the reality has
> been
> > a
> > > bit different - we tend to support the 2 most recent releases unless
> > it's a
> > > high severity CVE in Kafka itself (these tend to be much rarer,
> > > thankfully).
> > >
> > > Ismael
> > >
> > > On Sun, Oct 22, 2023, 10:19 AM Vedarth Sharma <
> vedarth.sha...@gmail.com>
> > > wrote:
> > >
> > > > Hi Mickael,
> > > > Thanks for going through the KIP and providing valuable feedback.
> > > >
> > > >1. We will support the latest LTS version of Java supported by
> > Apache
> > > >Kafka.
> > > >2. We will provide support for the last three releases. We've
> added
> > a
> > > >detailed example of this in the KIP under our EOL policy.
> > > >3. We can establish a nightly cron job using GitHub Actions and
> > > leverage
> > > >an open-source vulnerability scanning tool like trivy (
> > > >https://github.com/aquasecurity/trivy), to get vulnerability
> > reports
> > > on
> > > >all supported images. This tool offers a straightforward way to
> > > > integrate
> > > >vulnerability checks directly into our GitHub Actions workflow.
> > > >4. That's a good suggestion to have a GitHub Actions workflow. We
> > will
> > > >implement a GitHub Actions workflow to automate the build and
> > testing
> > > >process.
> > > >5. Regarding the release process, we observed that there isn't an
> > > >existing CI/CD pipeline. We can consider the addition of a GitHub
> > > > workflow
> > > >to facilitate the release process.
> > > >
> > > > Please let us know your thoughts on the above.
> > > >
> > > > Thanks and regards,
> > > > Vedarth
> > > >
> > > > On Fri, Oct 20, 2023 at 7:34 PM Mickael Maison <
> > mickael.mai...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > > > Hi Krishna,
> > > > >
> > > > > Overall I'm supportive of having an official docker image.
> > > > > I have a few questions:
> > > > > - Can you clarify the process of selecting the Java version? Is the
> > > > > proposal to only pick LTS versions? or to pick the highest version
> > > > > supported by Kafka?
> > > > > - Once a new Kafka version is released, what happens to the image
> > > > > containing the previous release? Do we expect to still update it in
> > > > > case of CVEs? If so for how long?
> > > > > - How will we get notified that the base image has a CVE?
> > > > > - Rather than having scripts PMC members have to run from their
> > > > > machines, would it be possible to have a Jenkins job or GitHub
> action?
> > > > >
> > > > > Thanks,
> > > > > Mickael
> > > > >
> > > > >
> > > > >
> > > > > On Fri, Oct 20, 2023 at 12:51 PM Vedarth Sharma
> > > > >  wrote:
> > > > > >
> > > > > > Hi Manikumar,
> > > > > >
> > > > > > Thanks for the feedback!
> > > > > >
> > > > > > 1. We propose the addition of a new directory named "docker" at
> the
> > > > root
> > > > > of
> > > > > > the repository, where all Docker-related code will be stored. A
> > > > detailed
> > > > > > directory structure has been added in the KIP.
> > > > > > 2. We request the creation of an Apache Kafka repository
> > > (apache/kafka)
> > > > > on
> > > > > DockerHub, to be administered under The Apache Software
> > > Foundation
> > > > > > 

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2319

2023-10-23 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 320393 lines...]
> Task :connect:json:testJar
> Task :connect:api:copyDependantLibs UP-TO-DATE
> Task :connect:json:testSrcJar
> Task :connect:api:jar UP-TO-DATE
> Task :connect:api:generateMetadataFileForMavenJavaPublication
> Task :connect:json:copyDependantLibs UP-TO-DATE
> Task :connect:json:jar UP-TO-DATE
> Task :connect:api:compileTestJava UP-TO-DATE
> Task :connect:json:generateMetadataFileForMavenJavaPublication
> Task :connect:api:testClasses UP-TO-DATE
> Task :connect:api:testJar
> Task :connect:api:testSrcJar
> Task :clients:generateMetadataFileForMavenJavaPublication
> Task :connect:api:publishMavenJavaPublicationToMavenLocal
> Task :connect:json:publishMavenJavaPublicationToMavenLocal
> Task :connect:json:publishToMavenLocal
> Task :connect:api:publishToMavenLocal
> Task :storage:storage-api:compileTestJava
> Task :storage:storage-api:testClasses
> Task :server-common:compileTestJava
> Task :server-common:testClasses
> Task :raft:compileTestJava
> Task :raft:testClasses
> Task :group-coordinator:compileTestJava
> Task :group-coordinator:testClasses

> Task :clients:javadoc
/home/jenkins/workspace/Kafka_kafka_trunk/clients/src/main/java/org/apache/kafka/clients/admin/ScramMechanism.java:32:
 warning - Tag @see: missing final '>': "https://cwiki.apache.org/confluence/display/KAFKA/KIP-554%3A+Add+Broker-side+SCRAM+Config+API;>KIP-554:
 Add Broker-side SCRAM Config API

 This code is duplicated in 
org.apache.kafka.common.security.scram.internals.ScramMechanism.
 The type field in both files must match and must not change. The type field
 is used both for passing ScramCredentialUpsertion and for the internal
 UserScramCredentialRecord. Do not change the type field."
/home/jenkins/workspace/Kafka_kafka_trunk/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/package-info.java:21:
 warning - Tag @link: reference not found: 
org.apache.kafka.common.security.oauthbearer
2 warnings

> Task :clients:javadocJar
> Task :metadata:compileTestJava
> Task :metadata:testClasses
> Task :core:compileScala
> Task :clients:srcJar
> Task :clients:testJar
> Task :clients:testSrcJar
> Task :clients:publishMavenJavaPublicationToMavenLocal
> Task :clients:publishToMavenLocal
> Task :core:classes
> Task :core:compileTestJava NO-SOURCE
> Task :core:compileTestScala
> Task :core:testClasses
> Task :streams:compileTestJava
> Task :streams:testClasses
> Task :streams:testJar
> Task :streams:testSrcJar
> Task :streams:publishMavenJavaPublicationToMavenLocal
> Task :streams:publishToMavenLocal

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

For more on this, please refer to 
https://docs.gradle.org/8.3/userguide/command_line_interface.html#sec:command_line_warnings
 in the Gradle documentation.

BUILD SUCCESSFUL in 4m 33s
94 actionable tasks: 41 executed, 53 up-to-date

Publishing build scan...
https://ge.apache.org/s/2vltypdd6ux5u

[Pipeline] sh
+ grep ^version= gradle.properties
+ cut -d= -f 2
[Pipeline] dir
Running in /home/jenkins/workspace/Kafka_kafka_trunk/streams/quickstart
[Pipeline] {
[Pipeline] sh
+ mvn clean install -Dgpg.skip
[INFO] Scanning for projects...
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] Kafka Streams :: Quickstart[pom]
[INFO] streams-quickstart-java[maven-archetype]
[INFO] 
[INFO] < org.apache.kafka:streams-quickstart >-
[INFO] Building Kafka Streams :: Quickstart 3.7.0-SNAPSHOT[1/2]
[INFO]   from pom.xml
[INFO] [ pom ]-
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart ---
[INFO] 
[INFO] --- site:3.5.1:attach-descriptor (attach-descriptor) @ 
streams-quickstart ---
[INFO] 
[INFO] --- gpg:1.6:sign (sign-artifacts) @ streams-quickstart ---
[INFO] 
[INFO] --- install:2.5.2:install (default-install) @ streams-quickstart ---
[INFO] Installing 
/home/jenkins/workspace/Kafka_kafka_trunk/streams/quickstart/pom.xml to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart/3.7.0-SNAPSHOT/streams-quickstart-3.7.0-SNAPSHOT.pom
[INFO] 
[INFO] --< org.apache.kafka:streams-quickstart-java >--
[INFO] Building streams-quickstart-java 3.7.0-SNAPSHOT[2/2]
[INFO]   from java/pom.xml
[INFO] --[ maven-archetype ]---
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ 

Re: [DISCUSS] KIP-975 Docker Image for Apache Kafka

2023-10-23 Thread Ismael Juma
Sorry, I noticed a typo in my message. I meant "Additionally, we should not
specify the EOL policy in this KIP" since it doesn't propose changing it.

Ismael

On Sun, Oct 22, 2023 at 10:56 PM Vedarth Sharma 
wrote:

> Hi Ismael,
> Thanks for the valuable feedback.
>
>1. No docker image specific release process: This was one of our
>considered approaches, but we thought that the docker image shouldn't
>block the AK release. Though I agree, treating the docker image as
>another artifact for every AK release makes much more sense. Hence,
>releasing a new version of
>Kafka for the affected branch in such a scenario is a much cleaner
>approach. Added this as the accepted approach in the KIP.
>2. EOL policy: Updated in the KIP.
>
> Thanks and regards,
> Vedarth
>
> On Sun, Oct 22, 2023 at 11:20 PM Ismael Juma  wrote:
>
> > Hi Vedarth,
> >
> > I think we shouldn't introduce any new release process that is docker
> > specific. We should consider the software in the docker image in the same
> > way as we consider third-party dependencies today - if there is a high
> > severity CVE affecting any of them, we aim to release a new version of
> > Kafka for the affected branch. It would include the latest Kafka code
> from
> > the branch.
> >
> > Additionally, we should specify the EOL policy in this KIP - we are not
> > changing it as part of it. One interesting detail is that the release
> > document claims we support the last 3 releases, but the reality has been
> a
> > bit different - we tend to support the 2 most recent releases unless
> it's a
> > high severity CVE in Kafka itself (these tend to be much rarer,
> > thankfully).
> >
> > Ismael
> >
> > On Sun, Oct 22, 2023, 10:19 AM Vedarth Sharma 
> > wrote:
> >
> > > Hi Mickael,
> > > Thanks for going through the KIP and providing valuable feedback.
> > >
> > >1. We will support the latest LTS version of Java supported by
> Apache
> > >Kafka.
> > >2. We will provide support for the last three releases. We've added
> a
> > >detailed example of this in the KIP under our EOL policy.
> > >3. We can establish a nightly cron job using GitHub Actions and
> > leverage
> > >an open-source vulnerability scanning tool like trivy (
> > >https://github.com/aquasecurity/trivy), to get vulnerability
> reports
> > on
> > >all supported images. This tool offers a straightforward way to
> > > integrate
> > >vulnerability checks directly into our GitHub Actions workflow.
> > >4. That's a good suggestion to have a GitHub Actions workflow. We
> will
> > >implement a GitHub Actions workflow to automate the build and
> testing
> > >process.
> > >5. Regarding the release process, we observed that there isn't an
> > >existing CI/CD pipeline. We can consider the addition of a GitHub
> > > workflow
> > >to facilitate the release process.
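For point 3 above, a nightly trivy scan can be wired up with a scheduled GitHub Actions workflow. The sketch below is purely illustrative - the workflow name, schedule, and image tags are hypothetical placeholders, and it assumes the community-maintained `aquasecurity/trivy-action` action:

```yaml
# Hypothetical .github/workflows/docker-cve-scan.yml -- a minimal sketch of
# the nightly vulnerability scan described above, not the actual KIP design.
name: Nightly Docker CVE scan
on:
  schedule:
    - cron: "0 2 * * *"   # once a night
jobs:
  scan:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Placeholder tag list -- in practice this would enumerate every
        # image still under support per the EOL policy.
        tag: ["latest"]
    steps:
      - name: Scan apache/kafka:${{ matrix.tag }} with trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: apache/kafka:${{ matrix.tag }}
          severity: HIGH,CRITICAL
          exit-code: "1"   # fail the job so a finding surfaces as a red run
```

Failing the job on HIGH/CRITICAL findings makes the scheduled run itself the notification channel, so no extra reporting plumbing is needed.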
> > >
> > > Please let us know your thoughts on the above.
> > >
> > > Thanks and regards,
> > > Vedarth
> > >
> > > On Fri, Oct 20, 2023 at 7:34 PM Mickael Maison <
> mickael.mai...@gmail.com
> > >
> > > wrote:
> > >
> > > > Hi Krishna,
> > > >
> > > > Overall I'm supportive of having an official docker image.
> > > > I have a few questions:
> > > > - Can you clarify the process of selecting the Java version? Is the
> > > > proposal to only pick LTS versions? or to pick the highest version
> > > > supported by Kafka?
> > > > - Once a new Kafka version is released, what happens to the image
> > > > containing the previous release? Do we expect to still update it in
> > > > case of CVEs? If so for how long?
> > > > - How will we get notified that the base image has a CVE?
> > > > - Rather than having scripts PMC members have to run from their
> > > > machines, would it be possible to have a Jenkins job or GitHub action?
> > > >
> > > > Thanks,
> > > > Mickael
> > > >
> > > >
> > > >
> > > > On Fri, Oct 20, 2023 at 12:51 PM Vedarth Sharma
> > > >  wrote:
> > > > >
> > > > > Hi Manikumar,
> > > > >
> > > > > Thanks for the feedback!
> > > > >
> > > > > 1. We propose the addition of a new directory named "docker" at the
> > > root
> > > > of
> > > > > the repository, where all Docker-related code will be stored. A
> > > detailed
> > > > > directory structure has been added in the KIP.
> > > > > 2. We request the creation of an Apache Kafka repository
> > (apache/kafka)
> > > > on
> > > > > DockerHub, to be administered under The Apache Software
> > Foundation
> > > > > . The PMC members should have the
> > > > > necessary permissions for pushing updates to the docker repo.
> > > > >
> > > > > Thanks and regards,
> > > > > Vedarth
> > > > >
> > > > >
> > > > > On Fri, Oct 20, 2023 at 2:44 PM Manikumar <
> manikumar.re...@gmail.com
> > >
> > > > wrote:
> > > > >
> > > > > > Hi Krishna, Vedarth,
> > > > > >
> > > > > > Thanks for the KIP.
> > > > > >
> > > > > > 1. Can we add directory structure of Docker Image related files
> in
> > > > Kafka
> > > > > >