Re: [VOTE] KIP-782: Expandable batch size in producer

2021-11-09 Thread Luke Chen
Hi devs,
Bump this thread.
Calling for a vote on KIP-782: Expandable batch size in producer.

The main goals of this KIP are:
1. higher throughput in the producer
2. better memory usage in the producer

Detailed description can be found here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-782%3A+Expandable+batch+size+in+producer
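To make the proposal concrete, here is a minimal sketch of a producer
configured with the settings this KIP proposes. Note that "batch.max.size"
and "batch.initial.size" are only the names proposed in the KIP; they do
not exist in any released client yet:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class Kip782ConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Existing config: the "soft limit" -- a batch is ready to send at this size.
        props.put("batch.size", 16384);
        // Proposed in KIP-782 (not in released clients): the hard upper bound.
        props.put("batch.max.size", 262144);
        // Proposed in KIP-782: the initial allocation, grown as records arrive.
        props.put("batch.initial.size", 4096);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "key", "value"));
        }
    }
}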

Any feedback and comments are welcome.

Thank you.
Luke

On Fri, Nov 5, 2021 at 4:37 PM Luke Chen  wrote:

> Hi Mickael,
> Thanks for the good comments! Answering them below:
>
> - When under load, the producer may allocate extra buffers. Are these
> buffers ever released if the load drops?
> --> This is a good point that I've never considered before. Yes, after
> introducing "batch.max.size", we should release some buffers back to the
> buffer pool. In this KIP, we'll only keep buffers up to "batch.size" in
> the pool, and mark the rest of the memory as free to use. The reason we
> return at most "batch.size" to the pool is that the semantics of
> "batch.size" make it the batch-full limit: in most cases, batch.size
> should be able to contain the records to be sent within the linger.ms time.
>
> - Do we really need batch.initial.size? It's not clear that having this
> extra setting adds a lot of value.
> --> I think "batch.initial.size" is important for achieving better memory
> usage. I've now made the default value 4KB, so after upgrading to the new
> release, producer memory usage will improve out of the box.
>
> I've updated the KIP.
>
> Thank you.
> Luke
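A rough sketch (not Kafka's actual BufferPool code) of the release rule
described above: only buffers of exactly "batch.size" go back to the pool
for reuse, while larger expanded buffers are left to the garbage collector:

import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

final class BufferPoolSketch {
    private final int poolableSize; // corresponds to "batch.size"
    private final Deque<ByteBuffer> free = new ArrayDeque<>();

    BufferPoolSketch(int poolableSize) {
        this.poolableSize = poolableSize;
    }

    ByteBuffer allocate(int size) {
        // Reuse a pooled buffer when the standard size is requested.
        if (size == poolableSize && !free.isEmpty()) {
            return free.pollFirst();
        }
        return ByteBuffer.allocate(size);
    }

    void release(ByteBuffer buffer) {
        if (buffer.capacity() == poolableSize) {
            buffer.clear();
            free.addFirst(buffer); // keep only "batch.size" buffers for reuse
        }
        // Larger (expanded) buffers are dropped here and reclaimed by GC,
        // i.e. their memory is "marked as free to use".
    }
}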
>
> On Wed, Nov 3, 2021 at 6:44 PM Mickael Maison 
> wrote:
>
>> Hi Luke,
>>
>> Thanks for the KIP. It looks like an interesting idea. I like the
>> concept of dynamically adjusting settings to handle load. I wonder if
>> other client settings could also benefit from similar logic.
>>
>> Just a couple of questions:
>> - When under load, the producer may allocate extra buffers. Are these
>> buffers ever released if the load drops?
>> - Do we really need batch.initial.size? It's not clear that having
>> this extra setting adds a lot of value.
>>
>> Thanks,
>> Mickael
>>
>> On Tue, Oct 26, 2021 at 11:12 AM Luke Chen  wrote:
>> >
>> > Thank you, Artem!
>> >
>> > @devs, you are welcome to vote on this KIP.
>> > Key proposal:
>> > 1. allocate multiple smaller initial buffers in the producer, and chain
>> > them together in a list when expanding, for better memory usage
>> > 2. add a max batch size config to the producer, so when the produce rate
>> > suddenly spikes, we can still get high throughput with batches larger
>> > than "batch.size" (and no larger than "batch.max.size", where "batch.size"
>> > is the soft limit and "batch.max.size" is the hard limit)
>> > Here's the updated KIP:
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-782%3A+Expandable+batch+size+in+producer
>> >
>> > And, any comments and feedback are welcome.
>> >
>> > Thank you.
>> > Luke
>> >
>> > On Tue, Oct 26, 2021 at 6:35 AM Artem Livshits
>> >  wrote:
>> >
>> > > Hi Luke,
>> > >
>> > > I've looked at the updated KIP-782, it looks good to me.
>> > >
>> > > -Artem
>> > >
>> > > On Sun, Oct 24, 2021 at 1:46 AM Luke Chen  wrote:
>> > >
>> > > > Hi Artem,
>> > > > Thanks for your good suggestion again.
>> > > > I've combined your idea into this KIP, and updated it.
>> > > > Note, in the end, I still keep the "batch.initial.size" config
>> > > > (default is 0, which means "batch.size" will be the initial batch
>> > > > size) for better memory conservation.
>> > > >
>> > > > Detailed description can be found here:
>> > > >
>> > >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-782%3A+Expandable+batch+size+in+producer
>> > > >
>> > > > Let me know if you have other suggestions.
>> > > >
>> > > > Thank you.
>> > > > Luke
>> > > >
>> > > > On Sat, Oct 23, 2021 at 10:50 AM Luke Chen  wrote:
>> > > >
>> > > >> Hi Artem,
>> > > >> Thanks for the suggestion. Let me confirm my understanding is
>> correct.
>> > > >> So, what you suggest is that "batch.size" is more like a "soft
>> > > >> limit" batch size, and the "hard limit" is "batch.max.size". When
>> > > >> the buffer reaches batch.size, it means the buffer is "ready" to
>> > > >> be sent. But before linger.ms is reached, if more data comes in,
>> > > >> we can still accumulate it into the same buffer, until it reaches
>> > > >> "batch.max.size". After that, we'll create another batch for it.
>> > > >>
>> > > >> So with your suggestion, we won't need "batch.initial.size", and we
>> > > >> can use "batch.size" as the initial batch size. We chain each
>> > > >> "batch.size" buffer together, until we reach "batch.max.size".
>> > > >> Something like this:
>> > > >>
>> > > >> [image: image.png]
>> > > >> Is my understanding correct?
>> > > >> If so, that sounds good to me.
>> > > >> If not, please kindly explain more to me.
>> > > >>
>> > > >> Thank you.
>> > > >> Luke
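A rough sketch of the soft-limit/hard-limit accumulation described above;
the names are illustrative and this is not the actual RecordAccumulator
code:

final class BatchSketch {
    private final int batchSize;    // soft limit: batch becomes "ready" to send
    private final int batchMaxSize; // hard limit: batch stops accepting records
    private int bytesUsed = 0;

    BatchSketch(int batchSize, int batchMaxSize) {
        this.batchSize = batchSize;
        this.batchMaxSize = batchMaxSize;
    }

    // Returns false when the hard limit would be exceeded,
    // telling the caller to create another batch.
    boolean tryAppend(int recordBytes) {
        if (bytesUsed + recordBytes > batchMaxSize) {
            return false;
        }
        bytesUsed += recordBytes;
        return true;
    }

    // The sender may drain this batch once the soft limit (or linger.ms) is
    // hit, but until then it keeps accumulating up to the hard limit.
    boolean readyToSend() {
        return bytesUsed >= batchSize;
    }
}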
>> > > >>
>> > > >>
>> > > >>
>> > > >>
>> > > >> On Sat, Oct 23, 2021 at 2:13 AM

[jira] [Resolved] (KAFKA-10543) Convert KTable joins to new PAPI

2021-11-09 Thread Jorge Esteban Quilcate Otoya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jorge Esteban Quilcate Otoya resolved KAFKA-10543.
--
Resolution: Fixed

https://github.com/apache/kafka/pull/11412

> Convert KTable joins to new PAPI
> 
>
> Key: KAFKA-10543
> URL: https://issues.apache.org/jira/browse/KAFKA-10543
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: John Roesler
>Assignee: Jorge Esteban Quilcate Otoya
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: [DISCUSS] KIP-719: Add Log4J2 Appender

2021-11-09 Thread Dongjin Lee
Hi Mickael,

I greatly appreciate your careful reading of the proposal! I wrote it
quite a while ago and rechecked it today.

> Is the KIP proposing to replace the existing log4j-appender or simply add
a new one for log4j2? Reading the KIP and with its current title, it's not
entirely explicit.

Oh, after re-reading it, I realized that this is not clear. Let me clarify:

1. Provide a log4j2 equivalent of the traditional log4j-appender:
log4j2-appender.
2. Migrate the modules that depend on log4j-appender (i.e., tools, trogdor,
shell) to log4j2-appender, removing log4j-appender from their dependencies.
3. Entirely remove log4j-appender from the project dependencies, along with
log4j.

I think log4j-appender may still be published with every new release, as
before, but the committee should make a decision on that policy.

> Under Rejected Alternative, the KIP states: "the Kafka appender provided
by log4j2 community stores log message in the Record key". Looking at the
code, it looks like the log message is stored in the Record value:
https://github.com/apache/logging-log4j2/blob/master/log4j-kafka/src/main/java/org/apache/logging/log4j/kafka/appender/KafkaManager.java#L135
Am I missing something?

It's totally my fault; I confused it with another appender. The
compatibility problem with the logging-log4j2 Kafka appender is not the
record format but the configuration. The logging-log4j2 Kafka appender
supports a `properties` configuration, which is used directly to
instantiate a Kafka producer. However, log4j-appender has been using
non-producer config names like brokerList (=bootstrap.servers) and
requiredNumAcks (=acks), while the logging-log4j2 Kafka appender instead
supports its own options such as retryCount and sendEventTimestamp.
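To make the gap concrete, here is a hedged sketch of the kind of option
translation such a facade could perform. The mapper class is hypothetical;
only the legacy option names on the left are the real log4j-appender
options mentioned above:

import java.util.Map;
import java.util.Properties;

final class LegacyOptionMapper {
    // Legacy log4j-appender option -> the producer config it corresponds to.
    private static final Map<String, String> LEGACY_TO_PRODUCER = Map.of(
            "brokerList", "bootstrap.servers",
            "requiredNumAcks", "acks");

    static Properties toProducerConfig(Properties legacyOptions) {
        Properties producerConfig = new Properties();
        legacyOptions.forEach((key, value) -> producerConfig.put(
                LEGACY_TO_PRODUCER.getOrDefault((String) key, (String) key), value));
        return producerConfig;
    }
}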

On second thought, using the logging-log4j2 Kafka appender internally and
making log4j2-appender a thin compatibility facade would be a better
approach. As I described above, the goal of this module is just to keep
backward compatibility, and (as you pointed out) the current
implementation adds little value. Since org.apache.logging.log4j:log4j-core
already includes a Kafka appender, we can make use of the 'proven wheel'
without adding more dependencies. I have not tried it yet, but I think it
is well worth it. (One additional advantage of this approach is that it
provides a bridge for users who hope to move from or into the
logging-log4j2 Kafka appender.)

> As the current log4j-appender is not even deprecated yet, in theory we
can't remove it till Kafka 4. If we want to speed up the process, I wonder
if the lack of documentation and a migration guide could help us. What do
you think?

In fact, this is what I am doing nowadays. While working with
log4j-appender, I found that despite the lack of documentation, a
considerable number of users are already using it[^1][^2][^3][^4][^5]. So,
I think providing documentation for those who are already using
log4j-appender is indispensable. It should include:

- The differences between log4j-appender and log4j2-appender.
- Which options are supported, and which are deprecated.
- Example configurations that show how to migrate.

Here is the summary:

1. The goal of this proposal is to replace the traditional log4j-appender
for compatibility reasons, though log4j-appender may still be published
after its deprecation.
2. At present, the KIP's description of the logging-log4j2 Kafka appender
is wrong: the problem is interface compatibility, not record format.
Focusing on a compatibility facade is a good approach.
3. Documentation focused on migration should be provided.

If you have any questions or suggestions, don't hesitate to tell me. Thanks
again for your comments!

Best,
Dongjin

[^1]:
https://docs.cloudera.com/csa/1.2.0/monitoring/topics/csa-kafka-logging.html
[^2]:
https://stackoverflow.com/questions/22034895/how-to-use-kafka-0-8-log4j-appender
[^3]:
https://stackoverflow.com/questions/32402405/delay-in-kafka-log4j-appender
[^4]:
https://stackoverflow.com/questions/32301129/kafka-log4j-appender-not-sending-messages
[^5]:
https://stackoverflow.com/questions/35628706/kafka-log4j-appender-0-9-does-not-work

On Mon, Nov 8, 2021 at 9:04 PM Mickael Maison 
wrote:

> Hi Dongjin,
>
> Thanks for working on the update to log4j2, it's definitely
> something we should complete.
> I have a couple of comments:
>
> 1) Is the KIP proposing to replace the existing log4j-appender or
> simply add a new one for log4j2? Reading the KIP and with its current
> title, it's not entirely explicit. For example, I don't see a statement
> under the Proposed Changes section. The PR seems to only add a new
> appender, but the KIP mentions we want to fully remove dependencies on
> log4j.
>
> 2) Under Rejected Alternative, the KIP states: "the Kafka appender
> provided by log4j2 community stores log message in the Record key".
> Looking at the code, it looks like the log message is stored in the
> Record value:
> https://github.com/apache/logging-log4j2/blob/master/log4j-kafka/src/main/java/org/apache/logging/log4j/kafka/appender/KafkaManager.java

Re: Allowing both IPv4 and IPv6 connections on the same port

2021-11-09 Thread Matthew de Detrich
So I have just created a PR implementing this that does not require
adding a new SecurityProtocol as originally suggested, i.e. you can use
PLAINTEXT://127.0.0.1:9092,PLAINTEXT://[::1]:9092. It only passes
validation in the specific case of two addresses sharing the same port
where one is IPv4 and the other is IPv6; otherwise the original validation
still applies.
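For illustration, a rough sketch (not the PR's actual code) of the relaxed
check: a shared port passes only when exactly one of the two addresses is
IPv4:

import java.net.Inet4Address;
import java.net.InetAddress;
import java.net.UnknownHostException;

final class ListenerPortCheck {
    // True only when exactly one address is IPv4 (so the other is IPv6).
    static boolean allowedOnSamePort(String hostA, String hostB) throws UnknownHostException {
        InetAddress a = InetAddress.getByName(hostA);
        InetAddress b = InetAddress.getByName(hostB);
        return (a instanceof Inet4Address) != (b instanceof Inet4Address);
    }

    public static void main(String[] args) throws UnknownHostException {
        System.out.println(allowedOnSamePort("127.0.0.1", "::1"));     // true
        System.out.println(allowedOnSamePort("127.0.0.1", "0.0.0.0")); // false: both IPv4
    }
}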

PR is open at https://github.com/apache/kafka/pull/11478, feel free to
review/comment on it.

On Mon, Nov 1, 2021 at 11:09 PM Colin McCabe  wrote:

> Hi Matthew,
>
> I think you should create a KIP for this proposal, since it is a change to
> our public interfaces. Also, there are some subtle questions that we need
> to answer, like: do the IPv4 and IPv6 interfaces that share the same port
> also have to share the same security protocol?
>
> best,
> Colin
>
>
> On Fri, Sep 24, 2021, at 01:44, Matthew de Detrich wrote:
> > Hello everyone,
> >
> > I wanted to ask if anyone has any thoughts/objections regarding Kafka
> > allowing multiple listeners on the same port as long as one is IPv4 and
> the
> > other is IPv6, i.e.
> >
> > listeners=PLAINTEXT://127.0.0.1:9092,PLAINTEXTV6://[::1]:9092
> > listener.security.protocol.map=PLAINTEXT:PLAINTEXT,PLAINTEXTV6:PLAINTEXT
> >
> > Currently Kafka enforces that every listener has to have a unique port
> > which doesn't allow such a configuration.
> >
> > Disabling this check and allowing such a configuration doesn't seem to
> > cause any issues, but I may be missing potential problems with allowing
> > both IPv4 and IPv6 on the same port. Does anyone have comments on this?
> >
> > Relevant Ticket: https://issues.apache.org/jira/browse/KAFKA-13299
> >
> > --
> >
> > Matthew de Detrich
> >
> > *Aiven Deutschland GmbH*
> >
> > Immanuelkirchstraße 26, 10405 Berlin
> >
> > Amtsgericht Charlottenburg, HRB 209739 B
> >
> > Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> >
> > *m:* +491603708037
> >
> > *w:* aiven.io *e:* matthew.dedetr...@aiven.io
>


-- 

Matthew de Detrich

*Aiven Deutschland GmbH*

Immanuelkirchstraße 26, 10405 Berlin

Amtsgericht Charlottenburg, HRB 209739 B

Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen

*m:* +491603708037

*w:* aiven.io *e:* matthew.dedetr...@aiven.io


Subscribe

2021-11-09 Thread wan...@emrubik.com




王江欢 wan...@emrubik.com
沈阳数融科技有限公司
TEL:15009881572


Re: [VOTE] KIP-791: Add Record Metadata to State Store Context

2021-11-09 Thread John Roesler
+1 (binding) from me.

Thanks, Patrick!
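For reference, a hedged sketch of how a store could use the accessor this
KIP proposes for offset tracking (the method name follows the KIP and may
differ in the released API):

import java.util.Optional;
import org.apache.kafka.streams.processor.StateStoreContext;
import org.apache.kafka.streams.processor.api.RecordMetadata;

final class OffsetTrackingHelper {
    private final StateStoreContext context;

    OffsetTrackingHelper(StateStoreContext context) {
        this.context = context;
    }

    // Offset of the record currently being processed, if metadata is
    // available (e.g. it may be absent for records not originating from a topic).
    Optional<Long> currentOffset() {
        return context.recordMetadata().map(RecordMetadata::offset);
    }
}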

On Mon, 2021-11-08 at 14:08 -0800, Guozhang Wang wrote:
> +1, thanks Patrick!
> 
> 
> Guozhang
> 
> On Mon, Nov 8, 2021 at 5:44 AM Vasiliki Papavasileiou
>  wrote:
> 
> > Hi Patrick,
> > 
> > Having the recordMetadata available in the state stores is fundamental for
> > the consistency work and the proposed approach is reasonable.
> > 
> > +1 (non-binding)
> > 
> > Thank you,
> > Vicky
> > 
> > On Mon, Nov 8, 2021 at 10:00 AM Luke Chen  wrote:
> > 
> > > Hi Patrick,
> > > Thanks for the KIP.
> > > Adding RecordMetadata into StateStoreContext for offset updating makes
> > > sense to me.
> > > 
> > > +1 (non-binding)
> > > 
> > > Thank you.
> > > Luke
> > > 
> > > 
> > > On Mon, Nov 8, 2021 at 5:18 PM Patrick Stuedi  wrote:
> > > 
> > > > Hi all,
> > > > 
> > > > Thanks for the feedback on KIP-791, I have updated the KIP and would
> > like
> > > > to start the voting.
> > > > 
> > > > The KIP can be found here:
> > > > https://cwiki.apache.org/confluence/x/I5BnCw
> > > > 
> > > > Please vote in this thread.
> > > > 
> > > > Thanks!
> > > > -Patrick
> > > > 
> > > 
> > 
> 
> 




Kafka-13351: nag

2021-11-09 Thread flo

Hi,

This is my first contribution to Apache Kafka, so I am not sure whether
I am being too impatient.


It would be great if somebody could review my PR.


Thanks

Florin



Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #556

2021-11-09 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-13417) Dynamic thread pool re-configurations may not get processed

2021-11-09 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-13417.
-
Resolution: Fixed

> Dynamic thread pool re-configurations may not get processed
> ---
>
> Key: KAFKA-13417
> URL: https://issues.apache.org/jira/browse/KAFKA-13417
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> `DynamicBrokerConfig.updateCurrentConfig` includes the following logic to
> update the current configuration and to let each `Reconfigurable` process
> the update:
> {code}
> val oldConfig = currentConfig
> val (newConfig, brokerReconfigurablesToUpdate) = processReconfiguration(newProps, validateOnly = false)
> if (newConfig ne currentConfig) {
>   currentConfig = newConfig
>   kafkaConfig.updateCurrentConfig(newConfig)
>   // Process BrokerReconfigurable updates after current config is updated
>   brokerReconfigurablesToUpdate.foreach(_.reconfigure(oldConfig, newConfig))
> }
> {code}
> The problem here is that `currentConfig` gets initialized as `kafkaConfig`,
> which means that the first call to `kafkaConfig.updateCurrentConfig(newConfig)`
> ends up mutating `currentConfig` and consequently `oldConfig`. The problem
> with this is that some of the `reconfigure` implementations will only apply
> a new configuration if the value in `oldConfig` does not match the value in
> `newConfig`. For example, here is the logic to update thread pools
> dynamically:
> {code}
>   override def reconfigure(oldConfig: KafkaConfig, newConfig: KafkaConfig): Unit = {
>     if (newConfig.numIoThreads != oldConfig.numIoThreads)
>       server.dataPlaneRequestHandlerPool.resizeThreadPool(newConfig.numIoThreads)
>     if (newConfig.numNetworkThreads != oldConfig.numNetworkThreads)
>       server.socketServer.resizeThreadPool(oldConfig.numNetworkThreads, newConfig.numNetworkThreads)
>     if (newConfig.numReplicaFetchers != oldConfig.numReplicaFetchers)
>       server.replicaManager.replicaFetcherManager.resizeThreadPool(newConfig.numReplicaFetchers)
>     if (newConfig.numRecoveryThreadsPerDataDir != oldConfig.numRecoveryThreadsPerDataDir)
>       server.logManager.resizeRecoveryThreadPool(newConfig.numRecoveryThreadsPerDataDir)
>     if (newConfig.backgroundThreads != oldConfig.backgroundThreads)
>       server.kafkaScheduler.resizeThreadPool(newConfig.backgroundThreads)
>   }
> {code}
> Because of this, the dynamic update will not get applied the first time it
> is made. I believe subsequent updates would work correctly, though, because
> we would have lost the indirect reference to `kafkaConfig`. Other than the
> `DynamicThreadPool` configurations, it looks like the config to update
> unclean leader election may also be affected by this bug.
> NOTE: This bug only affects KRaft, which is missing the call to
> `DynamicBrokerConfig.initialize()`.
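A minimal Java analogue of the aliasing described above (class and field
names are illustrative, not Kafka's):
{code}
final class ConfigSnapshot {
    int numIoThreads;
    ConfigSnapshot(int numIoThreads) { this.numIoThreads = numIoThreads; }
}

public class AliasingBugSketch {
    public static void main(String[] args) {
        ConfigSnapshot current = new ConfigSnapshot(8);
        ConfigSnapshot oldConfig = current; // alias, not a copy
        current.numIoThreads = 16;          // the "update" mutates the alias too
        // A reconfigure-style check never fires on the first update:
        System.out.println(oldConfig.numIoThreads != current.numIoThreads); // prints false
    }
}
{code}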



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[DISCUSS] KIP-796: Interactive Query v2

2021-11-09 Thread John Roesler
Hello all,

I'd like to start the discussion for KIP-796, which proposes
a revamp of the Interactive Query APIs in Kafka Streams.

The proposal is here:
https://cwiki.apache.org/confluence/x/34xnCw

I look forward to your feedback!

Thank you,
-John



Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #557

2021-11-09 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 498074 lines...]
[2021-11-10T00:03:02.939Z] > Task :raft:testClasses UP-TO-DATE
[2021-11-10T00:03:02.939Z] > Task :connect:json:testJar
[2021-11-10T00:03:02.939Z] > Task :connect:json:testSrcJar
[2021-11-10T00:03:02.939Z] > Task :metadata:compileTestJava UP-TO-DATE
[2021-11-10T00:03:02.939Z] > Task :metadata:testClasses UP-TO-DATE
[2021-11-10T00:03:02.939Z] > Task :clients:generateMetadataFileForMavenJavaPublication
[2021-11-10T00:03:02.939Z] > Task :clients:generatePomFileForMavenJavaPublication
[2021-11-10T00:03:02.939Z] 
[2021-11-10T00:03:02.939Z] > Task :streams:processMessages
[2021-11-10T00:03:02.939Z] Execution optimizations have been disabled for task ':streams:processMessages' to ensure correctness due to the following reasons:
[2021-11-10T00:03:02.939Z]   - Gradle detected a problem with the following location: '/home/jenkins/workspace/Kafka_kafka_trunk/streams/src/generated/java/org/apache/kafka/streams/internals/generated'. Reason: Task ':streams:srcJar' uses this output of task ':streams:processMessages' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency for more details about this problem.
[2021-11-10T00:03:02.939Z] MessageGenerator: processed 1 Kafka message JSON files(s).
[2021-11-10T00:03:02.939Z] 
[2021-11-10T00:03:02.939Z] > Task :streams:compileJava UP-TO-DATE
[2021-11-10T00:03:02.939Z] > Task :streams:classes UP-TO-DATE
[2021-11-10T00:03:02.939Z] > Task :streams:test-utils:compileJava UP-TO-DATE
[2021-11-10T00:03:03.874Z] > Task :streams:copyDependantLibs
[2021-11-10T00:03:03.874Z] > Task :streams:jar UP-TO-DATE
[2021-11-10T00:03:03.874Z] > Task :streams:generateMetadataFileForMavenJavaPublication
[2021-11-10T00:03:06.580Z] > Task :connect:api:javadoc
[2021-11-10T00:03:06.580Z] > Task :connect:api:copyDependantLibs UP-TO-DATE
[2021-11-10T00:03:06.580Z] > Task :connect:api:jar UP-TO-DATE
[2021-11-10T00:03:06.580Z] > Task :connect:api:generateMetadataFileForMavenJavaPublication
[2021-11-10T00:03:06.580Z] > Task :connect:json:copyDependantLibs UP-TO-DATE
[2021-11-10T00:03:06.580Z] > Task :connect:json:jar UP-TO-DATE
[2021-11-10T00:03:06.580Z] > Task :connect:json:generateMetadataFileForMavenJavaPublication
[2021-11-10T00:03:06.580Z] > Task :connect:api:javadocJar
[2021-11-10T00:03:06.580Z] > Task :connect:json:publishMavenJavaPublicationToMavenLocal
[2021-11-10T00:03:06.580Z] > Task :connect:json:publishToMavenLocal
[2021-11-10T00:03:06.580Z] > Task :connect:api:compileTestJava UP-TO-DATE
[2021-11-10T00:03:06.580Z] > Task :connect:api:testClasses UP-TO-DATE
[2021-11-10T00:03:06.580Z] > Task :connect:api:testJar
[2021-11-10T00:03:06.580Z] > Task :connect:api:testSrcJar
[2021-11-10T00:03:07.517Z] > Task :connect:api:publishMavenJavaPublicationToMavenLocal
[2021-11-10T00:03:07.517Z] > Task :connect:api:publishToMavenLocal
[2021-11-10T00:03:11.071Z] > Task :streams:javadoc
[2021-11-10T00:03:11.071Z] > Task :streams:javadocJar
[2021-11-10T00:03:12.000Z] > Task :clients:javadoc
[2021-11-10T00:03:12.000Z] > Task :clients:javadocJar
[2021-11-10T00:03:12.929Z] 
[2021-11-10T00:03:12.929Z] > Task :clients:srcJar
[2021-11-10T00:03:12.929Z] Execution optimizations have been disabled for task ':clients:srcJar' to ensure correctness due to the following reasons:
[2021-11-10T00:03:12.929Z]   - Gradle detected a problem with the following location: '/home/jenkins/workspace/Kafka_kafka_trunk/clients/src/generated/java'. Reason: Task ':clients:srcJar' uses this output of task ':clients:processMessages' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency for more details about this problem.
[2021-11-10T00:03:13.859Z] 
[2021-11-10T00:03:13.859Z] > Task :clients:testJar
[2021-11-10T00:03:13.859Z] > Task :clients:testSrcJar
[2021-11-10T00:03:13.859Z] > Task :clients:publishMavenJavaPublicationToMavenLocal
[2021-11-10T00:03:13.859Z] > Task :clients:publishToMavenLocal
[2021-11-10T00:03:32.761Z] > Task :core:compileScala
[2021-11-10T00:04:39.703Z] > Task :core:classes
[2021-11-10T00:04:39.703Z] > Task :core:compileTestJava NO-SOURCE
[2021-11-10T00:05:01.836Z] > Task :core:compileTestScala
[2021-11-10T00:05:43.258Z] > Task :core:testClasses
[2021-11-10T00:05:59.496Z] > Task :streams:compileTestJava
[2021-11-10T00:05:59.496Z] > Task :streams:testClasses
[2021-11-10T00:05:59.496Z] > Task :streams:testJar
[2021-11-10T00:05:59.496Z] > Task :streams:testSrcJar
[2021-11-10T00:05:59.496Z] > Task :streams:publishMavenJavaPublicationToMavenLocal
[20

[jira] [Created] (KAFKA-13438) Replace EasyMock and PowerMock with Mockito for WorkerTest

2021-11-09 Thread Liam Clarke-Hutchinson (Jira)
Liam Clarke-Hutchinson created KAFKA-13438:
--

 Summary: Replace EasyMock and PowerMock with Mockito for WorkerTest
 Key: KAFKA-13438
 URL: https://issues.apache.org/jira/browse/KAFKA-13438
 Project: Kafka
  Issue Type: Sub-task
Reporter: Liam Clarke-Hutchinson






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (KAFKA-13439) Deprecate EAGER rebalancing in Kafka Streams

2021-11-09 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-13439:
--

 Summary: Deprecate EAGER rebalancing in Kafka Streams
 Key: KAFKA-13439
 URL: https://issues.apache.org/jira/browse/KAFKA-13439
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: A. Sophie Blee-Goldman
 Fix For: 3.1.0


Cooperative rebalancing has been the default since 2.4, but we have always had 
to keep the logic for eager rebalancing around to give users a live upgrade 
path. The current upgrade path involves two rolling bounces: the first one 
upgrades the byte code and sets the UPGRADE_FROM config to keep Kafka Streams 
on the old EAGER protocol until everyone has been upgraded, and a second 
rolling bounce removes the config and starts enabling COOPERATIVE.

 

We'd like to finally remove the EAGER protocol and tackle some of the tech 
debt its presence has accrued, but we should first warn users that we intend 
to remove it, and that this will require a slight change to the upgrade path 
for anyone upgrading from 2.3 or below: going through a "bridge" version 
between 2.4 and 3.1 in the first rolling bounce, before upgrading to the 
final version.

We should also prepare by logging a warning in 3.1 if we see the UPGRADE_FROM 
config set, informing users that they will need to remove it before the EAGER 
protocol is removed. Then in version 3.2 (or whenever we remove it) we should 
throw an exception and shut down if a user has set the UPGRADE_FROM flag to a 
pre-2.4 version.
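For illustration, a hedged sketch of the first rolling bounce described above, 
using the existing StreamsConfig constants (values shown are illustrative):
{code}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class UpgradeFromSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // First bounce: pin the old EAGER protocol while the byte code is upgraded.
        props.put(StreamsConfig.UPGRADE_FROM_CONFIG, StreamsConfig.UPGRADE_FROM_23);
        // Second bounce (later): remove UPGRADE_FROM again to enable COOPERATIVE.
        System.out.println(props);
    }
}
{code}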



--
This message was sent by Atlassian Jira
(v8.20.1#820001)