Re: [DISCUSS] KIP-936: Throttle number of active PIDs

2023-08-21 Thread Claude Warren
I misspoke before. The LayeredBloomFilterTest.testExpiration() uses
milliseconds to expire the data, but it lays out an example of how to expire
filters in time intervals.

On Fri, Aug 18, 2023 at 4:01 PM Claude Warren  wrote:

> Sorry for taking so long to get back to you, somehow I missed your message.
>
>> I am not sure how this will work when we have different producer-id-rates
>> for different KafkaPrincipals as proposed in the KIP.
>> For example `userA` has a producer-id-rate of 1000 per hour while `user2`
>> has a quota of 100 producer ids per hour. How will we configure the max
>> entries for the Shape?
>>
>
> I am not certain I have a full understanding of your use case.  However, I
> am assuming that you want:
>
>- To ensure that all produced ids are tracked for 1 hour regardless of
>whether they were produced by userA or userB.
>- To use a sliding window with 1-minute resolution.
>
>
> There is a tradeoff in the Layered Bloom filter -- larger max entries (N)
> or greater depth.
>
> So the simplest calculation would be 1100 messages per hour / 60 minutes
> per hour = 18.3; let's round that to 20.
> With N=20, if more than 20 ids are produced in a minute, a second filter
> will be created to accept all those over 20.
> Let's assume that the first filter was created at time 0:00:00 and the
> 21st id comes in at 0:00:45.  When the first insert after 1:00:59 occurs
> (one hour after start + window time), the first filter will be removed.
> When the first insert after 1:01:44 occurs, the filter created at 0:00:45
> will be removed.
>
> So if you have a period of high usage, the number of filters (the depth of
> the layers) increases; as the usage decreases, the numbers go back to the
> expected depths.  You could set N to a much larger number and each filter
> would handle more ids before an extra layer was added.  However, if the
> filters are vastly too big, then there will be significant wasted space.
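>
> As a quick sketch of that timeline (plain Java, independent of the layered
> filter API -- the constants are just the numbers from the example above):
>
> public class LayerExpiry {
>     static final long WINDOW_MS = 60_000L;     // 1-minute resolution
>     static final long TTL_MS    = 3_600_000L;  // 1-hour retention
>
>     // A layer created at createdMs accepts ids for WINDOW_MS and may be
>     // discarded once an insert happens at or after window end + TTL.
>     static boolean expired(long createdMs, long nowMs) {
>         return nowMs >= createdMs + WINDOW_MS + TTL_MS;
>     }
>
>     public static void main(String[] args) {
>         System.out.println(expired(0L, 3_660_000L));      // true:  1:01:00
>         System.out.println(expired(45_000L, 3_700_000L)); // false: 1:01:40
>         System.out.println(expired(45_000L, 3_705_000L)); // true:  1:01:45
>     }
> }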
>
>> The only thing that comes to my mind to maintain this desired behavior in
>> the KIP is to NOT hash PID with KafkaPrincipal and keep a
>> Map
>> then each one of these bloom filters is controlled with
>> `Shape(, 0.1)`.
>>
>
> Do you need to keep a list for each principal?  Are the PIDs supposed to
> be globally unique?  If the question you are asking is "has principal_1 seen
> pid_2?", then hashing principal_1 and pid_2 together and creating a bloom
> filter will tell you, using one LayeredBloomFilter.  If you also need to
> ask "has anybody seen pid_2?", then there are some other solutions.  Your
> solution will work and may be appropriate in some cases where there is a
> wide range of principal message rates.  But in that case I would probably
> still use the principal+pid solution and just split the filters by
> estimated size, so that all the ones that need a large filter go into one
> system, and the smaller ones go into another.  I do note that the hur.st
> calculator [1] shows that for (1000, 0.1) you need 599 bytes and 3 hash
> functions; for (100, 0.1) you need 60 bytes and 3 hash functions; for
> (1100, 0.1) you need 659 bytes and 3 hash functions.  I would probably pick
> 704 bytes and 3 hash functions, which gives you (1176, 0.1).  I would pick
> this because 704 bytes divides evenly into the 64-bit long blocks that are
> used internally by the SimpleBloomFilter, so there is no wasted space.
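>
> For reference, the math behind those calculator numbers is the standard
> Bloom filter sizing formula; a minimal sketch (not the commons API):
>
> public class BloomSizing {
>     // bits m = ceil(n * -ln(p) / (ln 2)^2)
>     static long bits(long n, double p) {
>         return (long) Math.ceil(n * -Math.log(p) / (Math.log(2) * Math.log(2)));
>     }
>
>     // hash functions k = round(-ln(p) / ln 2)
>     static int hashes(double p) {
>         return (int) Math.round(-Math.log(p) / Math.log(2));
>     }
>
>     public static void main(String[] args) {
>         System.out.println(bits(1000, 0.1) / 8.0); // ~599 bytes, k = 3
>         System.out.println(bits(100, 0.1) / 8.0);  // ~60 bytes,  k = 3
>         System.out.println(bits(1100, 0.1) / 8.0); // ~659 bytes, k = 3
>         // 704 bytes = 5632 bits = 88 longs, enough for ~1176 ids at p = 0.1
>     }
> }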
>
>> Maybe I am missing something here, but I can't find anything in the
>> `LayerManager` code that points to how often the eviction function
>> runs. Do you mean that the eviction function runs every minute? If so, can
>> we control this?
>>
>
> The LayerManager.Builder has a setCleanup() method.  The function passed to
> it is run whenever a new layer is added to the filter.  This means that you
> can use whatever process you want to delete old filters (including none:
> LayerManager.Cleanup.noCleanup()).  The LayeredBloomFilterTest is an
> example of advancing by time (the one-minute intervals) and cleaning by
> time (1 hr).  It also creates a TimestampedBloomFilter to track the time.
> If we need an explicit mechanism to remove filters from the LayerManager,
> we can probably add one.
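>
> As a rough sketch of that time-based cleanup (the builder wiring may differ
> in the final commons API, and TimestampedBloomFilter here is just a filter
> paired with its creation time, as in the test):
>
> import java.util.Deque;
> import java.util.function.Consumer;
>
> record TimestampedBloomFilter(Object filter, long createdMs) {}
>
> class TimeBasedCleanup {
>     // Run whenever a new layer is added: drop layers older than ttlMs.
>     static Consumer<Deque<TimestampedBloomFilter>> byAge(long ttlMs) {
>         return layers -> {
>             long cutoff = System.currentTimeMillis() - ttlMs;
>             while (!layers.isEmpty() && layers.peekFirst().createdMs() < cutoff) {
>                 layers.removeFirst();
>             }
>         };
>     }
>     // e.g. builder.setCleanup(byAge(3_600_000L)), or
>     // LayerManager.Cleanup.noCleanup() to keep every layer.
> }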
>
> I hope this answers your questions.
>
> I am currently working on getting layered Bloom filters added to commons.
> A recent change set the stage for this, so it should be in soon.
>
> I look forward to hearing from you,
> Claude
>
> [1] https://hur.st/bloomfilter/?n=1000&p=0.1&m=&k=
>
> On Sun, Jul 16, 2023 at 1:00 PM Omnia Ibrahim 
> wrote:
>
>> Thanks Claude for the feedback and for raising this implementation with
>> Apache commons-collections.
>> I had a look at your layered bloom filter and at first glance I think it
>> would be a good improvement; however, regarding the following suggestion
>>
>> > By hashing the principal and PID into the filter as a single hash only
>> > one Bloom filter is required.
>>
>> I am not sure how this will work when we have different producer-id-rates
>> for different KafkaPrincipals as proposed in the KIP.

Re: [DISCUSS] KIP-967: Support custom SSL configuration for Kafka Connect RestServer

2023-08-21 Thread Николай Ижиков
Hello, Taras.

I found this KIP useful.
We already have the ability to set up a custom SslEngineFactory via
'ssl.engine.factory.class',
so it looks logical to extend this feature to the Connect REST server.

AFAIK many organizations adopt custom SSL storage like HashiCorp Vault or
similar, so native integration will be useful.
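
For illustration, a custom factory for that hook is a small class. Here is a
minimal skeleton, assuming the SslEngineFactory contract from KIP-519
(VaultSslEngineFactory and the Vault loading are made-up placeholders):

import java.io.IOException;
import java.security.KeyStore;
import java.util.Map;
import java.util.Set;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import org.apache.kafka.common.security.auth.SslEngineFactory;

public class VaultSslEngineFactory implements SslEngineFactory {
    private volatile SSLContext context;

    @Override
    public void configure(Map<String, ?> configs) {
        // Hypothetical: fetch key material from Vault and build an SSLContext.
        this.context = buildContextFromVault(configs);
    }

    @Override
    public SSLEngine createClientSslEngine(String peerHost, int peerPort,
                                           String endpointIdentification) {
        SSLEngine engine = context.createSSLEngine(peerHost, peerPort);
        engine.setUseClientMode(true);
        return engine;
    }

    @Override
    public SSLEngine createServerSslEngine(String peerHost, int peerPort) {
        SSLEngine engine = context.createSSLEngine(peerHost, peerPort);
        engine.setUseClientMode(false);
        return engine;
    }

    @Override
    public boolean shouldBeRebuilt(Map<String, Object> nextConfigs) {
        return false; // rebuild only if the relevant configs changed
    }

    @Override
    public Set<String> reconfigurableConfigs() { return Set.of(); }

    @Override
    public KeyStore keystore() { return null; }   // no file-based keystore

    @Override
    public KeyStore truststore() { return null; } // no file-based truststore

    @Override
    public void close() throws IOException { }

    private SSLContext buildContextFromVault(Map<String, ?> configs) {
        throw new UnsupportedOperationException("left as a stub");
    }
}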

> 14 Aug 2023, at 12:42, Taras Ledkov  wrote:
> 
> Hi Kafka Team.
> 
> I would like to start a discussion for KIP-967: Support custom SSL 
> configuration for Kafka Connect RestServer [1].
> The purpose of this KIP is to add the ability to use a custom SSL factory to 
> configure the Kafka Connect RestServer.
> It looks like the interface 'SslEngineFactory' may be used with simple 
> adapters. 
> 
> The prototype of the patch is available in PR #14203 [2].
> It is not a final/clean patch yet; just for demo & discussion. 
> 
> Thanks in advance for leaving a review!
> 
> [1]. 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-967%3A+Support+custom+SSL+configuration+for+Kafka+Connect+RestServer
> [2]. https://github.com/apache/kafka/pull/14203
> 
> --
> With best regards,
> Taras Ledkov



Re: [VOTE] KIP-942: Add Power(ppc64le) support

2023-08-21 Thread Mickael Maison
+1 (binding)
Thanks for the KIP!

Mickael

On Mon, Aug 14, 2023 at 1:40 PM Divij Vaidya  wrote:
>
> +1 (binding)
>
> --
> Divij Vaidya
>
>
> On Wed, Jul 26, 2023 at 9:04 AM Vaibhav Nazare
>  wrote:
> >
> > I'd like to call a vote on KIP-942


[jira] [Resolved] (KAFKA-14206) Upgrade zookeeper to 3.7.1 to address security vulnerabilities

2023-08-21 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14206.

Fix Version/s: 3.5.0
   Resolution: Fixed

Kafka 3.5.0 uses ZooKeeper 3.6.4

> Upgrade zookeeper to 3.7.1 to address security vulnerabilities
> --
>
> Key: KAFKA-14206
> URL: https://issues.apache.org/jira/browse/KAFKA-14206
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Affects Versions: 3.2.1
>Reporter: Valeriy Kassenbayev
>Priority: Blocker
> Fix For: 3.5.0
>
>
> Kafka 3.2.1 is using ZooKeeper, which is affected by 
> [CVE-2021-37136|https://security.snyk.io/vuln/SNYK-JAVA-IONETTY-1584064] and 
> [CVE-2021-37137|https://www.cve.org/CVERecord?id=CVE-2021-37137]:
> {code:java}
>   ✗ Denial of Service (DoS) [High 
> Severity][https://security.snyk.io/vuln/SNYK-JAVA-IONETTY-1584063] in 
> io.netty:netty-codec@4.1.63.Final
>     introduced by org.apache.kafka:kafka_2.13@3.2.1 > 
> org.apache.zookeeper:zookeeper@3.6.3 > io.netty:netty-handler@4.1.63.Final > 
> io.netty:netty-codec@4.1.63.Final
>   This issue was fixed in versions: 4.1.68.Final
>   ✗ Denial of Service (DoS) [High 
> Severity][https://security.snyk.io/vuln/SNYK-JAVA-IONETTY-1584064] in 
> io.netty:netty-codec@4.1.63.Final
>     introduced by org.apache.kafka:kafka_2.13@3.2.1 > 
> org.apache.zookeeper:zookeeper@3.6.3 > io.netty:netty-handler@4.1.63.Final > 
> io.netty:netty-codec@4.1.63.Final
>   This issue was fixed in versions: 4.1.68.Final {code}
> The issues were fixed in the next versions of ZooKeeper (starting from 
> 3.6.4). ZooKeeper 3.7.1 is the next stable 
> [release|https://zookeeper.apache.org/releases.html] at the moment.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Apache Kafka 3.6.0 release

2023-08-21 Thread Satish Duggana
Hi,
The 3.6 branch is created. Please make sure any PRs targeted for 3.6.0
are merged to the 3.6 branch once they are merged to trunk.

Thanks,
Satish.

On Wed, 16 Aug 2023 at 15:58, Satish Duggana  wrote:
>
> Hi,
> Please plan to merge PRs (including the major features) targeted for
> 3.6.0 by the end of Aug 20th UTC. Starting from August 21st, any pull
> request intended for the 3.6.0 release must also have its changes
> merged into the 3.6 branch, as mentioned in the release plan.
>
> Thanks,
> Satish.
>
> On Fri, 4 Aug 2023 at 18:39, Chris Egerton  wrote:
> >
> > Thanks for adding KIP-949, Satish!
> >
> > On Fri, Aug 4, 2023 at 7:06 AM Satish Duggana 
> > wrote:
> >
> > > Hi,
> > > Myself and Divij discussed and added the wiki for Kafka TieredStorage
> > > Early Access Release[1]. If you have any comments or feedback, please
> > > feel free to share them.
> > >
> > > 1.
> > > https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Tiered+Storage+Early+Access+Release+Notes
> > >
> > > Thanks,
> > > Satish.
> > >
> > > On Fri, 4 Aug 2023 at 08:40, Satish Duggana 
> > > wrote:
> > > >
> > > > Hi Chris,
> > > > Thanks for the update. This looks to be a minor change and is also
> > > > useful for backward compatibility. I added it to the release plan as
> > > > an exceptional case.
> > > >
> > > > ~Satish.
> > > >
> > > > On Thu, 3 Aug 2023 at 21:34, Chris Egerton 
> > > wrote:
> > > > >
> > > > > Hi Satish,
> > > > >
> > > > > Would it be possible to include KIP-949 (
> > > > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-949%3A+Add+flag+to+enable+the+usage+of+topic+separator+in+MM2+DefaultReplicationPolicy
> > > )
> > > > > in the 3.6.0 release? It passed voting yesterday, and is a very small,
> > > > > low-risk change that we'd like to put out as soon as possible in order
> > > to
> > > > > patch an accidental break in backwards compatibility caused a few
> > > versions
> > > > > ago.
> > > > >
> > > > > Best,
> > > > >
> > > > > Chris
> > > > >
> > > > > On Fri, Jul 28, 2023 at 2:35 AM Satish Duggana <
> > > satish.dugg...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi All,
> > > > > > Whoever has KIP entries in the 3.6.0 release plan, please update them
> > > > > > with the latest status by tomorrow (end of day, 29th Jul UTC).
> > > > > >
> > > > > > Thanks
> > > > > > Satish.
> > > > > >
> > > > > > On Fri, 28 Jul 2023 at 12:01, Satish Duggana <
> > > satish.dugg...@gmail.com>
> > > > > > wrote:
> > > > > > >
> > > > > > > Thanks Ismael and Divij for the suggestions.
> > > > > > >
> > > > > > > One way was to follow the earlier guidelines that we set for any
> > > early
> > > > > > > access release. It looks like Ismael already mentioned the example of
> > > > > > > KRaft.
> > > > > > >
> > > > > > > KIP-405 mentions upgrade/downgrade and limitations sections. We
> > > > > > > can clarify in the release notes for users how this feature can
> > > > > > > be used for early access.
> > > > > > >
> > > > > > > Divij, We do not want users to enable this feature on production
> > > > > > > environments in early access release. Let us work together on the
> > > > > > > followups Ismael suggested.
> > > > > > >
> > > > > > > ~Satish.
> > > > > > >
> > > > > > > On Fri, 28 Jul 2023 at 02:24, Divij Vaidya <
> > > divijvaidy...@gmail.com>
> > > > > > wrote:
> > > > > > > >
> > > > > > > > Those are great suggestions, thank you. We will carry this
> > > > > > > > discussion forward in a separate KIP for the Tiered Storage
> > > > > > > > release plan.
> > > > > > > >
> > > > > > > > On Thu 27. Jul 2023 at 21:46, Ismael Juma 
> > > wrote:
> > > > > > > >
> > > > > > > > > Hi Divij,
> > > > > > > > >
> > > > > > > > > I think the points you bring up for discussion are all good.
> > > My main
> > > > > > > > > feedback is that they should be discussed in the context of
> > > KIPs vs
> > > > > > the
> > > > > > > > > release template. That's why we have a backwards compatibility
> > > > > > section for
> > > > > > > > > every KIP, it's precisely to ensure we think carefully about
> > > some of
> > > > > > the
> > > > > > > > > points you're bringing up. When it comes to defining the
> > > meaning of
> > > > > > early
> > > > > > > > > access, we have two options:
> > > > > > > > >
> > > > > > > > > 1. Have a KIP specifically for tiered storage.
> > > > > > > > > 2. Have a KIP to define general guidelines for what early
> > > access
> > > > > > means.
> > > > > > > > >
> > > > > > > > > Does this make sense?
> > > > > > > > >
> > > > > > > > > Ismael
> > > > > > > > >
> > > > > > > > > On Thu, Jul 27, 2023 at 6:38 PM Divij Vaidya <
> > > > > > divijvaidy...@gmail.com>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Thank you for the response, Ismael.
> > > > > > > > > >
> > > > > > > > > > 1. Specifically in context of 3.6, I wanted this
> > > compatibility
> > > > > > > > > > guarantee point to encourage a discussion on
> > > > > > > > > >
> > > > > > > > > >
> 

[jira] [Created] (KAFKA-15387) Deprecate and remove Connect's duplicate task configurations retrieval endpoint

2023-08-21 Thread Yash Mayya (Jira)
Yash Mayya created KAFKA-15387:
--

 Summary: Deprecate and remove Connect's duplicate task 
configurations retrieval endpoint
 Key: KAFKA-15387
 URL: https://issues.apache.org/jira/browse/KAFKA-15387
 Project: Kafka
  Issue Type: Task
  Components: KafkaConnect
Reporter: Yash Mayya
Assignee: Yash Mayya
 Fix For: 4.0.0


A new endpoint ({{GET /connectors/{connector}/tasks-config}}) was added to 
Kafka Connect's REST API to expose task configurations in 
[KIP-661|https://cwiki.apache.org/confluence/display/KAFKA/KIP-661%3A+Expose+task+configurations+in+Connect+REST+API].
However, the original patch for Kafka Connect's REST API had already added an 
endpoint ({{GET /connectors/{connector}/tasks}}) to retrieve the list of a 
connector's tasks and their configurations (ref - 
[https://github.com/apache/kafka/pull/378] , 
https://issues.apache.org/jira/browse/KAFKA-2369), and this was missed in 
KIP-661. We can deprecate the endpoint added by KIP-661 in 3.7 (the next minor 
AK release) and remove it in 4.0 (the next major AK release), since it's 
redundant to have two separate endpoints to expose task configurations. Related 
discussions in 
[https://github.com/apache/kafka/pull/13424#discussion_r1144727886] and 
https://issues.apache.org/jira/browse/KAFKA-15377



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15388) Handle topics that earlier had compaction as the retention policy, were changed to a delete-only retention policy, and were onboarded to tiered storage

2023-08-21 Thread Satish Duggana (Jira)
Satish Duggana created KAFKA-15388:
--

 Summary: Handle topics that earlier had compaction as the retention 
policy, were changed to a delete-only retention policy, and were onboarded to 
tiered storage. 
 Key: KAFKA-15388
 URL: https://issues.apache.org/jira/browse/KAFKA-15388
 Project: Kafka
  Issue Type: Task
Reporter: Satish Duggana






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-953: partition method to be overloaded to accept headers as well.

2023-08-21 Thread Ismael Juma
Hi Jack,

I mean a DTO. That means you can add additional parameters later without
breaking compatibility. The current proposal would result in yet another
method each time we need to add parameters.
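
Purely as an illustration (hypothetical names, not from the KIP), the
record-like shape would look something like this:

import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.header.Headers;

interface PartitionerSketch {
    // One method; future inputs become new fields on the parameter object
    // instead of new overloads.
    int partition(PartitionRequest request, Cluster cluster);
}

// New inputs (such as headers) are added here, so the partition()
// signature never changes and implementors keep compiling.
record PartitionRequest(String topic, Object key, byte[] keyBytes,
                        Object value, byte[] valueBytes, Headers headers) {}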

Ismael

On Sun, Aug 20, 2023 at 4:53 AM Jack Tomy  wrote:

> Hey Ismael,
>
> Are you suggesting passing a param like a DTO, or are you suggesting passing
> the record object?
>
> I would also like to hear other devs' opinions on this as I personally
> favour what is done currently.
>
> On Thu, Aug 17, 2023 at 9:34 AM Ismael Juma  wrote:
>
> > Hi,
> >
> > Thanks for the KIP. The problem outlined here is a great example of why we
> > should be using a record-like structure to pass the parameters to a
> method
> > like this. Then we can add more parameters without having to introduce
> new
> > methods. Have we considered this option?
> >
> > Ismael
> >
> > On Mon, Aug 7, 2023 at 5:26 AM Jack Tomy  wrote:
> >
> > > Hey everyone.
> > >
> > > I would like to call for a vote on KIP-953: partition method to be
> > > overloaded to accept headers as well.
> > >
> > > KIP :
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263424937
> > > Discussion thread :
> > > https://lists.apache.org/thread/0f20kvfqkmhdqrwcb8vqgqn80szcrcdd
> > >
> > > Thanks
> > > --
> > > Best Regards
> > > *Jack*
> > >
> >
>
>
> --
> Best Regards
> *Jack*
>


[jira] [Created] (KAFKA-15389) MetadataLoader may publish an empty image on first start

2023-08-21 Thread David Arthur (Jira)
David Arthur created KAFKA-15389:


 Summary: MetadataLoader may publish an empty image on first start
 Key: KAFKA-15389
 URL: https://issues.apache.org/jira/browse/KAFKA-15389
 Project: Kafka
  Issue Type: Bug
Reporter: David Arthur


When first loading from an empty log, there is a case where MetadataLoader can 
publish an image before the bootstrap records are processed. This isn't exactly 
incorrect, since all components implicitly start from the empty image state, 
but it might be unexpected for some MetadataPublishers. 

 

For example, in KRaftMigrationDriver, if an old MetadataVersion is encountered, 
the driver transitions to the INACTIVE state.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-939: Support Participation in 2PC

2023-08-21 Thread Roger Hoover
Hi Artem,

Thanks for writing this KIP.  Can you clarify the requirements a bit more
for managing transaction state?  It looks like the application must have
stable transactional ids over time?  What is the granularity of those ids
and producers?  Say the application is a multi-threaded Java web server:
can/should all the concurrent threads share a transactional id and
producer?  That doesn't seem right to me unless the application is using
global DB locks that serialize all requests.  Instead, if the application
uses row-level DB locks, there could be multiple, concurrent, independent
txns happening in the same JVM, so it seems like the granularity of managing
transactional ids and txn state needs to line up with the granularity of the
DB locking.

Does that make sense or am I misunderstanding?

Thanks,

Roger

On Wed, Aug 16, 2023 at 11:40 PM Artem Livshits
 wrote:

> Hello,
>
> This is a discussion thread for
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-939%3A+Support+Participation+in+2PC
> .
>
> The KIP proposes extending Kafka transaction support (which already uses 2PC
> under the hood) to enable atomicity of dual writes to Kafka and an external
> database, and helps to fix a long-standing Flink issue.
>
> An example of code that uses the dual write recipe with JDBC and should
> work for most SQL databases is here
> https://github.com/apache/kafka/pull/14231.
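>
> In outline, the recipe looks like the sketch below (method names as
> proposed in the KIP; db and record are hypothetical stand-ins -- the PR
> above has the real code):
>
> producer.initTransactions(true);          // keepPreparedTxn: recover in-doubt txn
> producer.beginTransaction();
> producer.send(record);
> PreparedTxnState state = producer.prepareTransaction();  // phase 1
> db.commitWithTxnState(state.toString());  // DB commit stores the Kafka txn state
> producer.commitTransaction();             // phase 2
> // On restart, compare the state stored in the DB with the producer's
> // prepared state to decide whether to commit or abort the in-doubt txn.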
>
> The FLIP for the sister fix in Flink is here
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=255071710
>
> -Artem
>


Requesting permission to contribute to Apache Kafka

2023-08-21 Thread Hailey Ni
Hi,

This is Hailey. Wiki ID: hni. May I request edit permission to the Kafka
Wiki please?

Thanks,
Hailey


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2126

2023-08-21 Thread Apache Jenkins Server
See 


Changes:


--
Push event to branch trunk
Connecting to https://api.github.com using ASF Cloudbees Jenkins ci-builds
Obtained Jenkinsfile from 4b383378a0fd19d6d3c9ae7c2175fa3459661a04
[Pipeline] Start of Pipeline
[Pipeline] stage
[Pipeline] { (Build)
[Pipeline] parallel
[Pipeline] { (Branch: JDK 8 and Scala 2.12)
[Pipeline] { (Branch: JDK 11 and Scala 2.13)
[Pipeline] { (Branch: JDK 17 and Scala 2.13)
[Pipeline] { (Branch: JDK 20 and Scala 2.13)
[Pipeline] stage
[Pipeline] { (JDK 8 and Scala 2.12)
[Pipeline] stage
[Pipeline] { (JDK 11 and Scala 2.13)
[Pipeline] stage
[Pipeline] { (JDK 17 and Scala 2.13)
[Pipeline] stage
[Pipeline] { (JDK 20 and Scala 2.13)
[Pipeline] timeout
Timeout set to expire in 8 hr 0 min
[Pipeline] {
[Pipeline] timeout
Timeout set to expire in 8 hr 0 min
[Pipeline] {
[Pipeline] timeout
Timeout set to expire in 8 hr 0 min
[Pipeline] {
[Pipeline] timeout
Timeout set to expire in 8 hr 0 min
[Pipeline] {
[Pipeline] timestamps
[Pipeline] {
[Pipeline] timestamps
[Pipeline] {
[Pipeline] timestamps
[Pipeline] {
[Pipeline] timestamps
[Pipeline] {
[Pipeline] node
[Pipeline] node
[Pipeline] node
[2023-08-21T15:44:18.954Z] Running on builds38 in 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2
[Pipeline] node
[2023-08-21T15:44:18.964Z] Running on builds31 in 
/home/jenkins/workspace/Kafka_kafka_trunk
[Pipeline] {
[Pipeline] {
[2023-08-21T15:44:18.978Z] Running on builds40 in 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk
[2023-08-21T15:44:18.978Z] Running on builds57 in 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk
[Pipeline] {
[Pipeline] {
[Pipeline] checkout
[Pipeline] checkout
[Pipeline] checkout
[Pipeline] checkout
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch JDK 20 and Scala 2.13
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch JDK 17 and Scala 2.13
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch JDK 11 and Scala 2.13
Cancelling nested steps due to timeout
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch JDK 8 and Scala 2.12
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] node
Running on builds57 in /home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk
[Pipeline] {
[Pipeline] step


Re: Requesting permission to contribute to Apache Kafka

2023-08-21 Thread Justine Olshan
Hey Hailey,
You should have permissions now!

Justine

On Mon, Aug 21, 2023 at 2:11 PM Hailey Ni  wrote:

> Hi,
>
> This is Hailey. Wiki ID: hni. May I request edit permission to the Kafka
> Wiki please?
>
> Thanks,
> Hailey
>


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2127

2023-08-21 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 304638 lines...]
Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testUpdateExistingPartitions() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testUpdateExistingPartitions() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testEmptyWrite() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testEmptyWrite() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testExistingKRaftControllerClaim() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testExistingKRaftControllerClaim() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testReadAndWriteProducerId() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testReadAndWriteProducerId() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testMigrateTopicConfigs() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testMigrateTopicConfigs() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testNonIncreasingKRaftEpoch() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testNonIncreasingKRaftEpoch() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testMigrateEmptyZk() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testMigrateEmptyZk() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testTopicAndBrokerConfigsMigrationWithSnapshots() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testTopicAndBrokerConfigsMigrationWithSnapshots() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testClaimAndReleaseExistingController() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testClaimAndReleaseExistingController() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testClaimAbsentController() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testClaimAbsentController() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testIdempotentCreateTopics() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testIdempotentCreateTopics() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testCreateNewTopic() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testCreateNewTopic() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testUpdateExistingTopicWithNewAndChangedPartitions() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testUpdateExistingTopicWithNewAndChangedPartitions() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testZNodeChangeHandlerForDataChange() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testZNodeChangeHandlerForDataChange() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testZooKeeperSessionStateMetric() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testZooKeeperSessionStateMetric() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testExceptionInBeforeInitializingSession() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testExceptionInBeforeInitializingSession() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testGetChildrenExistingZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testGetChildrenExistingZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testConnection() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testConnection() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testZNodeChangeHandlerForCreation() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testZNodeChangeHandlerForCreation() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testGetAclExistingZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testGetAclExistingZNode() PASSED

Gradle Test Run :core:test > Gradle Test Ex

[jira] [Created] (KAFKA-15390) FetchResponse.preferredReplica may contain a fenced replica in KRaft mode

2023-08-21 Thread Deng Ziming (Jira)
Deng Ziming created KAFKA-15390:
---

 Summary: FetchResponse.preferredReplica may contain a fenced 
replica in KRaft mode
 Key: KAFKA-15390
 URL: https://issues.apache.org/jira/browse/KAFKA-15390
 Project: Kafka
  Issue Type: Bug
Reporter: Deng Ziming
Assignee: Deng Ziming


`KRaftMetadataCache.getPartitionReplicaEndpoints` will return a fenced broker 
id.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Need more clarity in documentation for upgrade/downgrade procedures and limitations across releases.

2023-08-21 Thread Kaushik Srinivas (Nokia)
Hi Team,

Referring to the upgrade documentation for Apache Kafka:

https://kafka.apache.org/34/documentation.html#upgrade_3_4_0

There is some confusion with respect to the statements below, taken from the 
linked section of the Apache docs.

"If you are upgrading from a version prior to 2.1.x, please see the note below 
about the change to the schema used to store consumer offsets. Once you have 
changed the inter.broker.protocol.version to the latest version, it will not be 
possible to downgrade to a version prior to 2.1."

The above statement says that a downgrade is not possible to versions prior to 
2.1 once the inter.broker.protocol.version has been upgraded to the latest 
version.

But there is another statement made in the documentation, in point 4, as below:

"Restart the brokers one by one for the new protocol version to take effect. 
Once the brokers begin using the latest protocol version, it will no longer be 
possible to downgrade the cluster to an older version."



These two statements are repeated across a lot of prior releases of Kafka and 
are confusing.

Below are the questions:

  1.  Is a downgrade not possible to "any" older version of Kafka once the 
inter.broker.protocol.version is updated to the latest version, OR are 
downgrades only impossible to versions "<2.1"?
  2.  Suppose one takes an approach similar to the upgrade even for the 
downgrade path, i.e. first downgrade the inter.broker.protocol.version to the 
previous version, then downgrade the software/code of Kafka to the previous 
release revision (sketched below). Does a downgrade work with this approach?
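
For concreteness, step one of that hypothetical downgrade path would be a 
rolling restart with only the protocol version pinned back (the version 
numbers here are placeholders, not a documented procedure):

# server.properties -- step 1: pin the protocol back to the older version,
# then perform a rolling restart of all brokers
inter.broker.protocol.version=3.3
# step 2, only after the rolling restart completes: replace the broker
# software with the 3.3 release binaries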

Can these two questions be documented if the answers are already known?

Regards,
Kaushik.