Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #26

2021-04-09 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-12650) NPE in InternalTopicManager#cleanUpCreatedTopics

2021-04-09 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-12650:
--

 Summary: NPE in InternalTopicManager#cleanUpCreatedTopics
 Key: KAFKA-12650
 URL: https://issues.apache.org/jira/browse/KAFKA-12650
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: A. Sophie Blee-Goldman


{code:java}
java.lang.NullPointerException
at 
org.apache.kafka.streams.processor.internals.InternalTopicManager.cleanUpCreatedTopics(InternalTopicManager.java:675)
at 
org.apache.kafka.streams.processor.internals.InternalTopicManager.maybeThrowTimeoutExceptionDuringSetup(InternalTopicManager.java:755)
at 
org.apache.kafka.streams.processor.internals.InternalTopicManager.processCreateTopicResults(InternalTopicManager.java:652)
at 
org.apache.kafka.streams.processor.internals.InternalTopicManager.setup(InternalTopicManager.java:599)
at 
org.apache.kafka.streams.processor.internals.InternalTopicManagerTest.shouldOnlyRetryNotSuccessfulFuturesDuringSetup(InternalTopicManagerTest.java:180)
{code}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12649) Expose cache resizer for dynamic memory allocation

2021-04-09 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-12649:
--

 Summary: Expose cache resizer for dynamic memory allocation
 Key: KAFKA-12649
 URL: https://issues.apache.org/jira/browse/KAFKA-12649
 Project: Kafka
  Issue Type: New Feature
  Components: streams
Reporter: A. Sophie Blee-Goldman


When we added the add/removeStreamThread() APIs to Streams, we implemented a 
cache resizer to adjust the allocated cache memory per thread accordingly. We 
could expose that to users as a public API to let them dynamically 
increase/decrease the amount of memory for the Streams cache (i.e. 
cache.max.bytes.buffering).
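
For illustration only, a rough sketch of what such a public API might look like; the 
resizeCache() method on KafkaStreams is hypothetical and only shows the intended shape of 
the call, not an existing API:

{code:java}
final Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "cache-resize-demo");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

final KafkaStreams streams = new KafkaStreams(buildTopology(), props); // buildTopology() is assumed
streams.start();

// Hypothetical API (does not exist today): adjust the total Streams cache size at
// runtime, reusing the internal cache resizer added for add/removeStreamThread().
streams.resizeCache(64 * 1024 * 1024L); // new total cache size in bytes
{code}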



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #25

2021-04-09 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 474870 lines...]
[2021-04-10T00:21:54.316Z] 
[2021-04-10T00:21:54.316Z] MetricsTest > testSessionExpireListenerMetrics() 
PASSED
[2021-04-10T00:21:54.316Z] 
[2021-04-10T00:21:54.316Z] MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic() STARTED
[2021-04-10T00:21:56.922Z] 
[2021-04-10T00:21:56.922Z] ControllerIntegrationTest > 
testTopicIdPersistsThroughControllerReelection() PASSED
[2021-04-10T00:21:56.922Z] 
[2021-04-10T00:21:56.922Z] ControllerIntegrationTest > 
testControllerFeatureZNodeSetupWhenFeatureVersioningIsDisabledWithNonExistingFeatureZNode()
 STARTED
[2021-04-10T00:21:57.481Z] 
[2021-04-10T00:21:57.481Z] MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic() PASSED
[2021-04-10T00:21:57.481Z] 
[2021-04-10T00:21:57.481Z] MetricsTest > testYammerMetricsCountMetric() STARTED
[2021-04-10T00:21:59.136Z] 
[2021-04-10T00:21:59.136Z] ControllerIntegrationTest > 
testControllerFeatureZNodeSetupWhenFeatureVersioningIsDisabledWithNonExistingFeatureZNode()
 PASSED
[2021-04-10T00:21:59.136Z] 
[2021-04-10T00:21:59.136Z] ControllerIntegrationTest > testEmptyCluster() 
STARTED
[2021-04-10T00:21:59.589Z] 
[2021-04-10T00:21:59.589Z] MetricsTest > testYammerMetricsCountMetric() PASSED
[2021-04-10T00:21:59.589Z] 
[2021-04-10T00:21:59.589Z] MetricsTest > testClusterIdMetric() STARTED
[2021-04-10T00:22:01.192Z] 
[2021-04-10T00:22:01.192Z] ControllerIntegrationTest > testEmptyCluster() PASSED
[2021-04-10T00:22:01.192Z] 
[2021-04-10T00:22:01.192Z] ControllerIntegrationTest > 
testControllerMoveOnPreferredReplicaElection() STARTED
[2021-04-10T00:22:01.894Z] 
[2021-04-10T00:22:01.894Z] MetricsTest > testClusterIdMetric() PASSED
[2021-04-10T00:22:01.894Z] 
[2021-04-10T00:22:01.894Z] MetricsTest > testControllerMetrics() STARTED
[2021-04-10T00:22:03.305Z] 
[2021-04-10T00:22:03.305Z] ControllerIntegrationTest > 
testControllerMoveOnPreferredReplicaElection() PASSED
[2021-04-10T00:22:03.305Z] 
[2021-04-10T00:22:03.305Z] ControllerIntegrationTest > 
testPreferredReplicaLeaderElection() STARTED
[2021-04-10T00:22:05.115Z] 
[2021-04-10T00:22:05.115Z] MetricsTest > testControllerMetrics() PASSED
[2021-04-10T00:22:05.115Z] 
[2021-04-10T00:22:05.115Z] MetricsTest > testWindowsStyleTagNames() STARTED
[2021-04-10T00:22:08.596Z] 
[2021-04-10T00:22:08.596Z] MetricsTest > testWindowsStyleTagNames() PASSED
[2021-04-10T00:22:08.596Z] 
[2021-04-10T00:22:08.596Z] MetricsTest > testBrokerStateMetric() STARTED
[2021-04-10T00:22:09.524Z] 
[2021-04-10T00:22:09.524Z] ControllerIntegrationTest > 
testPreferredReplicaLeaderElection() PASSED
[2021-04-10T00:22:09.524Z] 
[2021-04-10T00:22:09.524Z] ControllerIntegrationTest > 
testMetadataPropagationOnBrokerChange() STARTED
[2021-04-10T00:22:10.701Z] 
[2021-04-10T00:22:10.701Z] MetricsTest > testBrokerStateMetric() PASSED
[2021-04-10T00:22:10.701Z] 
[2021-04-10T00:22:10.701Z] MetricsTest > testBrokerTopicMetricsBytesInOut() 
STARTED
[2021-04-10T00:22:13.861Z] 
[2021-04-10T00:22:13.861Z] ControllerIntegrationTest > 
testMetadataPropagationOnBrokerChange() PASSED
[2021-04-10T00:22:13.861Z] 
[2021-04-10T00:22:13.861Z] ControllerIntegrationTest > 
testControllerFeatureZNodeSetupWhenFeatureVersioningIsEnabledWithDisabledExistingFeatureZNode()
 STARTED
[2021-04-10T00:22:15.232Z] 
[2021-04-10T00:22:15.232Z] MetricsTest > testBrokerTopicMetricsBytesInOut() 
PASSED
[2021-04-10T00:22:15.232Z] 
[2021-04-10T00:22:15.232Z] MetricsTest > testJMXFilter() STARTED
[2021-04-10T00:22:16.011Z] 
[2021-04-10T00:22:16.011Z] ControllerIntegrationTest > 
testControllerFeatureZNodeSetupWhenFeatureVersioningIsEnabledWithDisabledExistingFeatureZNode()
 PASSED
[2021-04-10T00:22:16.011Z] 
[2021-04-10T00:22:16.011Z] ControllerIntegrationTest > 
testMetadataPropagationForOfflineReplicas() STARTED
[2021-04-10T00:22:17.338Z] 
[2021-04-10T00:22:17.338Z] MetricsTest > testJMXFilter() PASSED
[2021-04-10T00:22:21.788Z] 
[2021-04-10T00:22:21.788Z] Deprecated Gradle features were used in this build, 
making it incompatible with Gradle 7.0.
[2021-04-10T00:22:21.788Z] Use '--warning-mode all' to show the individual 
deprecation warnings.
[2021-04-10T00:22:21.788Z] See 
https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings
[2021-04-10T00:22:21.788Z] 
[2021-04-10T00:22:21.788Z] BUILD SUCCESSFUL in 2h 13m 40s
[2021-04-10T00:22:21.788Z] 178 actionable tasks: 97 executed, 81 up-to-date
[2021-04-10T00:22:21.788Z] 
[2021-04-10T00:22:21.788Z] See the profiling report at: 
file:///home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/build/reports/profile/profile-2021-04-09-22-08-45.html
[2021-04-10T00:22:21.788Z] A fine-grained performance profile is available: use 
the --scan option.
[Pipeline] junit
[2021-04-10T00:22:22.665Z] Recording test results
[2021-04-10T00:22:28.709Z] 
[2021-04-10T00:22:28.709Z] 

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 2.8 #7

2021-04-09 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-12648) Experiment with resilient isomorphic topologies

2021-04-09 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-12648:
--

 Summary: Experiment with resilient isomorphic topologies
 Key: KAFKA-12648
 URL: https://issues.apache.org/jira/browse/KAFKA-12648
 Project: Kafka
  Issue Type: New Feature
  Components: streams
Reporter: A. Sophie Blee-Goldman
Assignee: A. Sophie Blee-Goldman


We're not ready to make this a public feature yet, but I want to start 
experimenting with some ways to make Streams applications more resilient in the 
face of isomorphic topological changes (e.g. adding/removing/reordering 
subtopologies).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12646) Implement loading snapshot in the controller

2021-04-09 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-12646:
--

 Summary: Implement loading snapshot in the controller
 Key: KAFKA-12646
 URL: https://issues.apache.org/jira/browse/KAFKA-12646
 Project: Kafka
  Issue Type: Sub-task
  Components: controller
Reporter: Jose Armando Garcia Sancio






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12647) Implement loading snapshot in the broker

2021-04-09 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-12647:
--

 Summary: Implement loading snapshot in the broker
 Key: KAFKA-12647
 URL: https://issues.apache.org/jira/browse/KAFKA-12647
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jose Armando Garcia Sancio






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 2.8 #6

2021-04-09 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #24

2021-04-09 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 3399 lines...]
[2021-04-09T20:58:41.594Z] * Try:
[2021-04-09T20:58:41.594Z] Run with --stacktrace option to get the stack trace. 
Run with --info or --debug option to get more log output. Run with --scan to 
get full insights.
[2021-04-09T20:58:41.594Z] 
[2021-04-09T20:58:41.594Z] * Get more help at https://help.gradle.org
[2021-04-09T20:58:41.594Z] 
[2021-04-09T20:58:41.594Z] Deprecated Gradle features were used in this build, 
making it incompatible with Gradle 7.0.
[2021-04-09T20:58:41.594Z] Use '--warning-mode all' to show the individual 
deprecation warnings.
[2021-04-09T20:58:41.594Z] See 
https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings
[2021-04-09T20:58:41.594Z] 
[2021-04-09T20:58:41.594Z] BUILD FAILED in 4m 49s
[2021-04-09T20:58:41.594Z] 182 actionable tasks: 181 executed, 1 up-to-date
[2021-04-09T20:58:41.594Z] 
[2021-04-09T20:58:41.594Z] See the profiling report at: 
file:///home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/build/reports/profile/profile-2021-04-09-20-53-50.html
[2021-04-09T20:58:41.594Z] A fine-grained performance profile is available: use 
the --scan option.
[2021-04-09T20:58:41.876Z] 
[2021-04-09T20:58:41.876Z] FAILURE: Build failed with an exception.
[2021-04-09T20:58:41.876Z] 
[2021-04-09T20:58:41.876Z] * What went wrong:
[2021-04-09T20:58:41.876Z] Execution failed for task ':streams:compileJava'.
[2021-04-09T20:58:41.876Z] > Compilation failed; see the compiler error output 
for details.
[2021-04-09T20:58:41.876Z] 
[2021-04-09T20:58:41.876Z] * Try:
[2021-04-09T20:58:41.876Z] Run with --stacktrace option to get the stack trace. 
Run with --info or --debug option to get more log output. Run with --scan to 
get full insights.
[2021-04-09T20:58:41.876Z] 
[2021-04-09T20:58:41.876Z] * Get more help at https://help.gradle.org
[2021-04-09T20:58:41.876Z] 
[2021-04-09T20:58:41.876Z] Deprecated Gradle features were used in this build, 
making it incompatible with Gradle 7.0.
[2021-04-09T20:58:41.876Z] Use '--warning-mode all' to show the individual 
deprecation warnings.
[2021-04-09T20:58:41.876Z] See 
https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings
[2021-04-09T20:58:41.876Z] 
[2021-04-09T20:58:41.876Z] BUILD FAILED in 4m 50s
[2021-04-09T20:58:41.876Z] 182 actionable tasks: 181 executed, 1 up-to-date
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch JDK 15 and Scala 2.13
[2021-04-09T20:58:43.405Z] 
[2021-04-09T20:58:43.405Z] See the profiling report at: 
file:///home/jenkins/workspace/Kafka_kafka_trunk/build/reports/profile/profile-2021-04-09-20-53-55.html
[2021-04-09T20:58:43.405Z] A fine-grained performance profile is available: use 
the --scan option.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch JDK 8 and Scala 2.12
[2021-04-09T20:58:48.866Z] 
[2021-04-09T20:58:48.866Z] FAILURE: Build failed with an exception.
[2021-04-09T20:58:48.866Z] 
[2021-04-09T20:58:48.866Z] * What went wrong:
[2021-04-09T20:58:48.866Z] Execution failed for task ':streams:compileJava'.
[2021-04-09T20:58:48.866Z] > Compilation failed; see the compiler error output 
for details.
[2021-04-09T20:58:48.866Z] 
[2021-04-09T20:58:48.866Z] * Try:
[2021-04-09T20:58:48.866Z] Run with --stacktrace option to get the stack trace. 
Run with --info or --debug option to get more log output. Run with --scan to 
get full insights.
[2021-04-09T20:58:48.866Z] 
[2021-04-09T20:58:48.866Z] * Get more help at https://help.gradle.org
[2021-04-09T20:58:48.866Z] 
[2021-04-09T20:58:48.866Z] Deprecated Gradle features were used in this build, 
making it incompatible with Gradle 7.0.
[2021-04-09T20:58:48.866Z] Use '--warning-mode all' to show the individual 
deprecation warnings.
[2021-04-09T20:58:48.866Z] See 
https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings
[2021-04-09T20:58:48.866Z] 
[2021-04-09T20:58:48.866Z] BUILD FAILED in 5m 1s
[2021-04-09T20:58:48.866Z] 182 actionable tasks: 181 executed, 1 up-to-date
[2021-04-09T20:58:48.866Z] 
[2021-04-09T20:58:48.866Z] See the profiling report at: 
file:///home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/build/reports/profile/profile-2021-04-09-20-53-44.html
[2021-04-09T20:58:48.866Z] A fine-grained performance profile is available: use 
the --scan option.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // 

Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #23

2021-04-09 Thread Apache Jenkins Server
See 




Re: Subject: [VOTE] 2.8.0 RC1

2021-04-09 Thread Bill Bejeck
Hi John,

Thanks for running the 2.8.0 release!

I've started to validate it and noticed the site-docs haven't been
installed to https://kafka.apache.org/28/documentation.html yet.

Thanks again!

-Bill

On Tue, Apr 6, 2021 at 5:37 PM John Roesler  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for release of Apache Kafka
> 2.8.0. This is a major release that includes many new
> features, including:
>
> * Early-access release of replacing Zookeeper with a self-
> managed quorum
> * Add Describe Cluster API
> * Support mutual TLS authentication on SASL_SSL listeners
> * Ergonomic improvements to Streams TopologyTestDriver
> * Logger API improvement to respect the hierarchy
> * Request/response trace logs are now JSON-formatted
> * New API to add and remove Streams threads while running
> * New REST API to expose Connect task configurations
> * Fixed the TimeWindowDeserializer to be able to deserialize
> keys outside of Streams (such as in the console consumer)
> * Streams resilience improvement: new uncaught exception
> handler
> * Streams resilience improvement: automatically recover from
> transient timeout exceptions
>
>
>
>
> Release notes for the 2.8.0 release:
> https://home.apache.org/~vvcephei/kafka-2.8.0-rc1/RELEASE_NOTES.html
>
>
> *** Please download, test and vote by 6 April 2021 ***
>
> Kafka's KEYS file containing PGP keys we use to sign the
> release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
>
> https://home.apache.org/~vvcephei/kafka-2.8.0-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
>
> https://home.apache.org/~vvcephei/kafka-2.8.0-rc1/javadoc/
>
> * Tag to be voted upon (off 2.8 branch) is the 2.8.0 tag:
>
> https://github.com/apache/kafka/releases/tag/2.8.0-rc1
>
> * Documentation:
> https://kafka.apache.org/28/documentation.html
>
> * Protocol:
> https://kafka.apache.org/28/protocol.html
>
>
> /**
>
> Thanks,
> John
>
>
>
>


Re: [DISCUSS] KIP-718: Make KTable Join on Foreign key unopinionated

2021-04-09 Thread Marco Aurélio Lotz
Hi everyone,

I am fine with sticking with Materialized and adding the info to the
javadoc then - so we keep the footprint smaller.
I will then update the KIP to match what we discussed here and send it for
a vote next week.

I will raise a new bug-fix ticket and change KAFKA-10383 to become a feature,
if that's ok.

Cheers,
Marco
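
For context, a minimal sketch of the existing foreign-key join overload that already takes
a Materialized for the result store; with the bug-fix above, the store type passed here
would also drive the subscription store. This is a fragment meant to run inside any method;
the topic and store names are illustrative:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

StreamsBuilder builder = new StreamsBuilder();
// The left table's value holds the foreign key (the right table's key) as a plain String.
KTable<String, String> orders = builder.table("orders");        // value = customerId
KTable<String, String> customers = builder.table("customers");  // value = customer name

KTable<String, String> enriched = orders.join(
    customers,
    orderValue -> orderValue,                     // FK extractor: the order value is the customerId
    (order, customer) -> order + "/" + customer,  // joiner
    Materialized.<String, String>as(Stores.inMemoryKeyValueStore("enriched-orders")));

With the change discussed above, passing Stores.inMemoryKeyValueStore(...) here would no
longer silently force a RocksDB-backed subscription store.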

On Wed, Apr 7, 2021 at 4:15 AM Matthias J. Sax  wrote:

> Just catching up here.
>
> I agree that we have two issue, and the first (align subscription store
> to main store) can be done as a bug-fix.
>
> For the KIP (that addressed the second), I tend to agree that reusing
> `Materialized` might be better as it would keep the API surface area
> smaller.
>
>
> -Matthias
>
> On 4/6/21 8:48 AM, John Roesler wrote:
> > Hi Marco,
> >
> > Just a quick clarification: I just reviewed the Materialized class. It
> looks like the only undesirable members are:
> > 1. Retention
> > 2. Key/Value serdes
> >
> > The underlying store type would be “KeyValueStore”, in
> > which case the withRetention javadoc already says it’s ignored.
> >
> > Perhaps we could just stick with Materialized by adding a note to the
> Key/Value serdes setters that they are ignored for FKJoin subscription
> stores?
> >
> > Not as elegant as a new config class, but these config classes actually
> bring a fair amount of complexity, so it might be nice to avoid a new one.
> >
> > Thanks,
> > John
> >
> > On Tue, Apr 6, 2021, at 10:28, Marco Aurélio Lotz wrote:
> >> Hi John / Guozhang,
> >>
> >> If I correctly understood John's message, he agrees on having the two
> >> scenarios (piggy-back and api extension). In my view, these two
> scenarios
> >> are separate tasks - the first one is a bug-fix and the second is an
> >> improvement on the current API.
> >>
> >> - bug-fix: On the current API, we change its implementation to piggy
> back
> >> on the materialization method provided to the materialized parameter.
> This
> >> way it will not be opinionated anymore and will not force RocksDb
> >> persistence for subscription store. Thus an in-memory materialized
> >> parameter would imply an in-memory subscription store, for example.
> From my
> >> understanding, the original implementation tried to be as unopinionated
> >> towards storage methods as possible - and the current implementation is
> not
> >> allowing that. Was that the case? We would still need to add this
> >> modification to the update notes, since it may affect some deployments.
> >>
> >> - improvement: We extend the API to allow a user to fine tune different
> >> materialization methods for subscription and join store. This is done by
> >> adding a new parameter to the associated methods.
> >>
> >> Does it sound reasonable to you Guozhang?
> >> On your question, does it make sense for a user to decide retention
> >> policies (withRetention method) or caching for subscription stores? I can
> >> see why one would fine-tune logging, for example, but at first not these
> >> other behaviours. That's why I am unsure about using the Materialized class.
> >>
> >> @John, I will update the KIP with your points as soon as we clarify
> this.
> >>
> >> Cheers,
> >> Marco
> >>
> >> On Tue, Apr 6, 2021 at 1:17 AM Guozhang Wang 
> wrote:
> >>
> >>> Thanks Marco / John,
> >>>
> >>> I think the arguments for not piggy-backing on the existing
> Materialized
> >>> makes sense; on the other hand, if we go this route should we just use
> a
> >>> separate Materialized than using an extended /
> >>> narrowed-scoped MaterializedSubscription since it seems we want to
> allow
> >>> users to fully customize this store?
> >>>
> >>> Guozhang
> >>>
> >>> On Thu, Apr 1, 2021 at 5:28 PM John Roesler 
> wrote:
> >>>
>  Thanks Marco,
> 
>  Sorry if I caused any trouble!
> 
>  I don’t remember what I was thinking before, but reasoning about it
> now,
>  you might need the fine-grained choice if:
> 
>  1. The number or size of records in each partition of both tables is
>  small(ish), but the cardinality of the join is very high. Then you
> might
>  want an in-memory table store, but an on-disk subscription store.
> 
>  2. The number or size of records is very large, but the join
> cardinality
>  is low. Then you might need an on-disk table store, but an in-memory
>  subscription store.
> 
>  3. You might want a different kind (or differently configured) store
> for
>  the subscription store, since its access pattern is so different.
> 
>  If you buy these, it might be good to put the justification into the
> KIP.
> 
>  I’m in favor of the default you’ve proposed.
> 
>  Thanks,
>  John
> 
>  On Mon, Mar 29, 2021, at 04:24, Marco Aurélio Lotz wrote:
> > Hi Guozhang,
> >
> > Apologies for the late answer. Originally that was my proposal - to
> > piggyback on the provided materialisation method (
> > 

[GitHub] [kafka-site] mjsax commented on pull request #338: MINOR quickstart.html remove extra closing paren, fix syntax

2021-04-09 Thread GitBox


mjsax commented on pull request #338:
URL: https://github.com/apache/kafka-site/pull/338#issuecomment-816903340


   Thanks @Alee4738! Merged both PRs.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] mjsax merged pull request #338: MINOR quickstart.html remove extra closing paren, fix syntax

2021-04-09 Thread GitBox


mjsax merged pull request #338:
URL: https://github.com/apache/kafka-site/pull/338


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (KAFKA-12449) Remove deprecated WindowStore#put

2021-04-09 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-12449.
-
Fix Version/s: 3.0.0
   Resolution: Fixed

> Remove deprecated WindowStore#put
> -
>
> Key: KAFKA-12449
> URL: https://issues.apache.org/jira/browse/KAFKA-12449
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Affects Versions: 3.0.0
>Reporter: Jorge Esteban Quilcate Otoya
>Assignee: Jorge Esteban Quilcate Otoya
>Priority: Major
> Fix For: 3.0.0
>
>
> Related to KIP-474: 
> [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=115526545] 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12645) KIP-731: Record Rate Limiting for Kafka Connect

2021-04-09 Thread Ryanne Dolan (Jira)
Ryanne Dolan created KAFKA-12645:


 Summary: KIP-731: Record Rate Limiting for Kafka Connect
 Key: KAFKA-12645
 URL: https://issues.apache.org/jira/browse/KAFKA-12645
 Project: Kafka
  Issue Type: Improvement
Reporter: Ryanne Dolan
Assignee: Ryanne Dolan


https://cwiki.apache.org/confluence/display/KAFKA/KIP-731%3A+Record+Rate+Limiting+for+Kafka+Connect



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-726: Make the CooperativeStickyAssignor as the default assignor

2021-04-09 Thread Sophie Blee-Goldman
1) Yes, all of the above will be part of KAFKA-12477 (not KIP-726)

2) No, KAFKA-12638 would be nice to have but I don't think it's appropriate
to remove
the default implementation of #onPartitionsLost in 3.0 since we never gave
any indication
yet that we intend to remove it

3) Yes, this would be similar to when a Consumer drops out of the group.
It's always been
possible for a member to miss a rebalance and have its partition be
reassigned to another
member, during which time both members would claim to own said partition.
But this is safe
because the member who dropped out is blocked from committing offsets on
that partition.

On Fri, Apr 9, 2021 at 2:46 AM Luke Chen  wrote:

> Hi Sophie,
> That sounds great; it takes care of every case I can think of.
> Questions:
> 1. Do you mean the short-circuit will also be implemented in KAFKA-12477?
> 2. I don't think KAFKA-12638 is a blocker for KIP-726, am I right?
> 3. So, does that mean it is still possible for multiple consumers to own
> the same topic partition? And in that situation, we prevent them from
> committing and wait for the next rebalance (which should happen soon). Is my
> understanding correct?
>
> Thank you very much for finding this great solution.
>
> Luke
>
> On Fri, Apr 9, 2021 at 11:37 AM Sophie Blee-Goldman
>  wrote:
>
> > Alright, here's the detailed proposal for KAFKA-12477. This assumes we
> will
> > change the default assignor to ["cooperative-sticky", "range"] in
> KIP-726.
> > It also acknowledges that users may attempt any kind of upgrade without
> > reading the docs, and so we need to put in safeguards against data
> > corruption rather than assume everyone will follow the safe upgrade path.
> >
> > With this proposal,
> > 1) New applications on 3.0 will enable cooperative rebalancing by default
> > 2) Existing applications which don’t set an assignor can safely upgrade
> to
> > 3.0 using a single rolling bounce with no extra steps, and will
> > automatically transition to cooperative rebalancing
> > 3) Existing applications which do set an assignor that uses EAGER can
> > likewise upgrade their applications to COOPERATIVE with a single rolling
> > bounce
> > 4) Once on 3.0, applications can safely go back and forth between EAGER
> and
> > COOPERATIVE
> > 5) Applications can safely downgrade away from 3.0
> >
> > The high-level idea for dynamic protocol upgrades is that the group will
> > leverage the assignor selected by the group coordinator to determine when
> > it’s safe to upgrade to COOPERATIVE, and trigger a fail-safe to protect
> the
> > group in case of rare events or user misconfiguration. The group
> > coordinator selects the most preferred assignor that’s supported by all
> > members of the group, so we know that all members will support
> COOPERATIVE
> > once we receive the “cooperative-sticky” assignor after a rebalance. At
> > this point, each member can upgrade their own protocol to COOPERATIVE.
> > However, there may be situations in which an EAGER member may join the
> > group even after upgrading to COOPERATIVE. For example, during a rolling
> > upgrade if the last remaining member on the old bytecode misses a
> > rebalance, the other members will be allowed to upgrade to COOPERATIVE.
> If
> > the old member rejoins and is chosen to be the group leader before it’s
> > upgraded to 3.0, it won’t be aware that the other members of the group
> have
> > not yet revoked their partitions when computing the assignment.
> >
> > Short Circuit:
> > The risk of mixing the cooperative and eager rebalancing protocols is
> that
> > a partition may be assigned to one member while it has yet to be revoked
> > from its previous owner. The danger is that the new owner may begin
> > processing and committing offsets for this partition while the previous
> > owner is also committing offsets in its #onPartitionsRevoked callback,
> > which is invoked at the end of the rebalance in the cooperative protocol.
> > This can result in these consumers overwriting each other’s offsets and
> > getting a corrupted view of the partition. Note that it’s not possible to
> > commit during a rebalance, so we can protect against offset corruption by
> > blocking further commits after we detect that the group leader may not
> > understand COOPERATIVE, but before we invoke #onPartitionsRevoked. This
> is
> > the “short-circuit” — if we detect that the group is in an unsafe state,
> we
> > invoke #onPartitionsLost instead of #onPartitionsRevoked and explicitly
> > prevent offsets from being committed on those revoked partitions.
> >
> > Consumer procedure:
> > Upon startup, the consumer will initially select the highest
> > commonly-supported protocol across its configured assignors. With
> > ["cooperative-sticky", "range”], the initial protocol will be EAGER when
> > the member first joins the group. Following a rebalance, each member will
> > check the selected assignor. If the chosen assignor supports COOPERATIVE,
> > the member can upgrade their 

[DISCUSS] KIP-731: Record Rate Limiting for Kafka Connect

2021-04-09 Thread Ryanne Dolan
Hey y'all, I'd like to draw you attention to a new KIP:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-731%3A+Record+Rate+Limiting+for+Kafka+Connect

Lemme know what you think. Thanks!

Ryanne


[jira] [Created] (KAFKA-12644) Add Missing Class-Level Javadoc to Descendants of org.apache.kafka.common.errors.ApiException

2021-04-09 Thread Israel Ekpo (Jira)
Israel Ekpo created KAFKA-12644:
---

 Summary: Add Missing Class-Level Javadoc to Descendants of 
org.apache.kafka.common.errors.ApiException
 Key: KAFKA-12644
 URL: https://issues.apache.org/jira/browse/KAFKA-12644
 Project: Kafka
  Issue Type: Improvement
  Components: clients, documentation
Affects Versions: 3.0.0, 2.8.1
Reporter: Israel Ekpo
Assignee: Israel Ekpo
 Fix For: 3.0.0, 2.8.1


I noticed that class-level Javadocs are missing from some classes in the 
org.apache.kafka.common.errors package. This issue is for tracking the work of 
adding the missing class-level javadocs for those Exception classes.

https://kafka.apache.org/27/javadoc/org/apache/kafka/common/errors/package-summary.html

https://github.com/apache/kafka/tree/trunk/clients/src/main/java/org/apache/kafka/common/errors

Basic class-level documentation could be derived by mapping the error 
conditions documented in the protocol

https://kafka.apache.org/protocol#protocol_constants



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #22

2021-04-09 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-12591) Remove deprecated `quota.producer.default` and `quota.consumer.default` configurations

2021-04-09 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-12591.
-
Fix Version/s: 3.0.0
 Reviewer: Rajini Sivaram
   Resolution: Fixed

> Remove deprecated `quota.producer.default` and `quota.consumer.default` 
> configurations
> --
>
> Key: KAFKA-12591
> URL: https://issues.apache.org/jira/browse/KAFKA-12591
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 3.0.0
>
>
> `quota.producer.default` and `quota.consumer.default` were deprecated in AK 
> 0.11.0.0. I propose to remove them in AK 3.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12643) Kafka Streams 2.7 with Kafka Broker 2.6.x regression: bad timestamp in transform/process (this.context.schedule function)

2021-04-09 Thread David EVANO (Jira)
David EVANO created KAFKA-12643:
---

 Summary: Kafka Streams 2.7 with Kafka Broker 2.6.x regression: bad 
timestamp in transform/process (this.context.schedule function)
 Key: KAFKA-12643
 URL: https://issues.apache.org/jira/browse/KAFKA-12643
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.7.0
Reporter: David EVANO
 Attachments: Capture d’écran 2021-04-09 à 17.50.05.png

During a transform() or a process() method:

Define a scheduled task:

this.context.schedule(Duration.ofSeconds(1), PunctuationType.WALL_CLOCK_TIME, 
timestamp -> {...})

Inside the punctuator, store.put(...) or context.forward(...) produces a record with an 
invalid timestamp.

For forward(), a workaround is to set the timestamp explicitly:

context.forward(entry.key, entry.value.toString(), 
To.all().withTimestamp(timestamp));

But for state.put(...) or state.delete(...) there is no workaround.
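
For reference, a minimal sketch of the punctuator and the To.all().withTimestamp(...) 
workaround described above, using the 2.7 Processor API (the class name and the forwarded 
key/value are illustrative):

{code:java}
import java.time.Duration;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.processor.To;

public class PunctuatingTransformer implements Transformer<String, String, KeyValue<String, String>> {

    private ProcessorContext context;

    @Override
    public void init(final ProcessorContext context) {
        this.context = context;
        // Scheduled task, as in the description above.
        context.schedule(Duration.ofSeconds(1), PunctuationType.WALL_CLOCK_TIME, timestamp -> {
            // Workaround for forward(): pass the punctuation timestamp explicitly so the
            // forwarded record does not carry an invalid timestamp.
            context.forward("some-key", "some-value", To.all().withTimestamp(timestamp));
            // Note: state.put(...)/state.delete(...) have no equivalent override (the issue above).
        });
    }

    @Override
    public KeyValue<String, String> transform(final String key, final String value) {
        return KeyValue.pair(key, value);
    }

    @Override
    public void close() { }
}
{code}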

Is it mandatory to have the Kafka broker version aligned with the Kafka Streams 
version?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12642) Improve Rebalance reason upon metadata change

2021-04-09 Thread Nicolas Guyomar (Jira)
Nicolas Guyomar created KAFKA-12642:
---

 Summary: Improve Rebalance reason upon metadata change
 Key: KAFKA-12642
 URL: https://issues.apache.org/jira/browse/KAFKA-12642
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Nicolas Guyomar


Whenever the known member metadata no longer matches the one from a 
JoinGroupRequest, the GroupCoordinator triggers a rebalance with the following 
reason: "Updating metadata for member ${member.memberId}". However, there are 2 
underlying reasons in that part of the code in MemberMetadata.scala:
{code:java}
def matches(protocols: List[(String, Array[Byte])]): Boolean = {
  // Reason 1: the number of supported protocols changed
  if (protocols.size != this.supportedProtocols.size)
    return false

  // Reason 2: a protocol name or its metadata bytes changed
  for (i <- protocols.indices) {
    val p1 = protocols(i)
    val p2 = supportedProtocols(i)
    if (p1._1 != p2._1 || !util.Arrays.equals(p1._2, p2._2))
      return false
  }
  true
}{code}
Could we improve the Rebalance Reason with a bit more detail, maybe?

 

Thank you

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] 2.7.1 RC2

2021-04-09 Thread Mickael Maison
Hi,

Here is a successful build of 2.7.1 RC2:
https://ci-builds.apache.org/job/Kafka/job/kafka-2.7-jdk8/144/

Thanks

On Thu, Apr 8, 2021 at 6:27 PM Mickael Maison  wrote:
>
> Hello Kafka users, developers and client-developers,
>
> This is the third candidate for release of Apache Kafka 2.7.1.
>
> Since 2.7.1 RC1, the following JIRAs have been fixed: KAFKA-12593,
> KAFKA-12474, KAFKA-12602.
>
> Release notes for the 2.7.1 release:
> https://home.apache.org/~mimaison/kafka-2.7.1-rc2/RELEASE_NOTES.html
>
> *** Please download, test and vote by Friday, April 16, 5pm BST
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~mimaison/kafka-2.7.1-rc2/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~mimaison/kafka-2.7.1-rc2/javadoc/
>
> * Tag to be voted upon (off 2.7 branch) is the 2.7.1 tag:
> https://github.com/apache/kafka/releases/tag/2.7.1-rc2
>
> * Documentation:
> https://kafka.apache.org/27/documentation.html
>
> * Protocol:
> https://kafka.apache.org/27/protocol.html
>
> * Successful Jenkins builds for the 2.7 branch:
> The build is still running, I'll update the thread once it's complete
>
> /**
>
> Thanks,
> Mickael


Re: [DISCUSS] KIP-726: Make the CooperativeStickyAssignor as the default assignor

2021-04-09 Thread Luke Chen
Hi Sophie,
That sounds great; it takes care of every case I can think of.
Questions:
1. Do you mean the short-circuit will also be implemented in KAFKA-12477?
2. I don't think KAFKA-12638 is a blocker for KIP-726, am I right?
3. So, does that mean it is still possible for multiple consumers to own
the same topic partition? And in that situation, we prevent them from
committing and wait for the next rebalance (which should happen soon). Is my
understanding correct?

Thank you very much for finding this great solution.

Luke

On Fri, Apr 9, 2021 at 11:37 AM Sophie Blee-Goldman
 wrote:

> Alright, here's the detailed proposal for KAFKA-12477. This assumes we will
> change the default assignor to ["cooperative-sticky", "range"] in KIP-726.
> It also acknowledges that users may attempt any kind of upgrade without
> reading the docs, and so we need to put in safeguards against data
> corruption rather than assume everyone will follow the safe upgrade path.
>
> With this proposal,
> 1) New applications on 3.0 will enable cooperative rebalancing by default
> 2) Existing applications which don’t set an assignor can safely upgrade to
> 3.0 using a single rolling bounce with no extra steps, and will
> automatically transition to cooperative rebalancing
> 3) Existing applications which do set an assignor that uses EAGER can
> likewise upgrade their applications to COOPERATIVE with a single rolling
> bounce
> 4) Once on 3.0, applications can safely go back and forth between EAGER and
> COOPERATIVE
> 5) Applications can safely downgrade away from 3.0
>
> The high-level idea for dynamic protocol upgrades is that the group will
> leverage the assignor selected by the group coordinator to determine when
> it’s safe to upgrade to COOPERATIVE, and trigger a fail-safe to protect the
> group in case of rare events or user misconfiguration. The group
> coordinator selects the most preferred assignor that’s supported by all
> members of the group, so we know that all members will support COOPERATIVE
> once we receive the “cooperative-sticky” assignor after a rebalance. At
> this point, each member can upgrade their own protocol to COOPERATIVE.
> However, there may be situations in which an EAGER member may join the
> group even after upgrading to COOPERATIVE. For example, during a rolling
> upgrade if the last remaining member on the old bytecode misses a
> rebalance, the other members will be allowed to upgrade to COOPERATIVE. If
> the old member rejoins and is chosen to be the group leader before it’s
> upgraded to 3.0, it won’t be aware that the other members of the group have
> not yet revoked their partitions when computing the assignment.
>
> Short Circuit:
> The risk of mixing the cooperative and eager rebalancing protocols is that
> a partition may be assigned to one member while it has yet to be revoked
> from its previous owner. The danger is that the new owner may begin
> processing and committing offsets for this partition while the previous
> owner is also committing offsets in its #onPartitionsRevoked callback,
> which is invoked at the end of the rebalance in the cooperative protocol.
> This can result in these consumers overwriting each other’s offsets and
> getting a corrupted view of the partition. Note that it’s not possible to
> commit during a rebalance, so we can protect against offset corruption by
> blocking further commits after we detect that the group leader may not
> understand COOPERATIVE, but before we invoke #onPartitionsRevoked. This is
> the “short-circuit” — if we detect that the group is in an unsafe state, we
> invoke #onPartitionsLost instead of #onPartitionsRevoked and explicitly
> prevent offsets from being committed on those revoked partitions.
>
> Consumer procedure:
> Upon startup, the consumer will initially select the highest
> commonly-supported protocol across its configured assignors. With
> ["cooperative-sticky", "range”], the initial protocol will be EAGER when
> the member first joins the group. Following a rebalance, each member will
> check the selected assignor. If the chosen assignor supports COOPERATIVE,
> the member can upgrade their used protocol to COOPERATIVE and no further
> action is required. If the member is already on COOPERATIVE but the
> selected assignor does NOT support it, then we need to trigger the
> short-circuit. In this case we will invoke #onPartitionsLost instead of
> #onPartitionsRevoked, and set a flag to block any attempts at committing
> those partitions which have been revoked. If a commit is attempted, as may
> be the case if the user does not implement #onPartitionsLost (see
> KAFKA-12638), we will throw a CommitFailedException which will be bubbled
> up through poll() after completing the rebalance. The member will then
> downgrade its protocol to EAGER for the next rebalance.
>
> Let me know what you think,
> Sophie
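
A rough sketch, deliberately simplified and not the actual consumer internals, of the
per-member decision logic described in the quoted proposal (names are illustrative, and
the commit-blocking part of the short-circuit is omitted):

import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

// Decide the member's rebalance protocol after each completed rebalance,
// based on the assignor the group coordinator selected.
enum RebalanceProtocol { EAGER, COOPERATIVE }

final class ProtocolDecisionSketch {
    private RebalanceProtocol current = RebalanceProtocol.EAGER;

    RebalanceProtocol afterRebalance(final boolean selectedAssignorSupportsCooperative,
                                     final ConsumerRebalanceListener listener,
                                     final Collection<TopicPartition> revokedPartitions) {
        if (selectedAssignorSupportsCooperative) {
            // Every member in the group understands COOPERATIVE, so it is safe to upgrade.
            current = RebalanceProtocol.COOPERATIVE;
        } else if (current == RebalanceProtocol.COOPERATIVE) {
            // Short-circuit: the group leader may still be on EAGER, so treat the revoked
            // partitions as lost and fall back to EAGER for the next rebalance.
            listener.onPartitionsLost(revokedPartitions);
            current = RebalanceProtocol.EAGER;
        }
        return current;
    }
}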
>
> On Fri, Apr 2, 2021 at 7:08 PM Luke Chen  wrote:
>
> > Hi Sophie,
> > Making the default to "cooperative-sticky, range" is a 

[jira] [Created] (KAFKA-12641) Clear RemoteLogLeaderEpochState entry when it become empty.

2021-04-09 Thread Satish Duggana (Jira)
Satish Duggana created KAFKA-12641:
--

 Summary: Clear RemoteLogLeaderEpochState entry when it become 
empty. 
 Key: KAFKA-12641
 URL: https://issues.apache.org/jira/browse/KAFKA-12641
 Project: Kafka
  Issue Type: Sub-task
Reporter: Satish Duggana
Assignee: Satish Duggana


https://github.com/apache/kafka/pull/10218#discussion_r609895193



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12640) AbstractCoordinator ignores backoff timeout when joining the consumer group

2021-04-09 Thread Matiss Gutmanis (Jira)
Matiss Gutmanis created KAFKA-12640:
---

 Summary: AbstractCoordinator ignores backoff timeout when joining 
the consumer group
 Key: KAFKA-12640
 URL: https://issues.apache.org/jira/browse/KAFKA-12640
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 2.7.0
Reporter: Matiss Gutmanis


We observed heavy logging while trying to join a consumer group during partial 
unavailability of the Kafka cluster (this is part of our testing process). It seems that 
{{rebalanceConfig.retryBackoffMs}}, used in 
{{org.apache.kafka.clients.consumer.internals.AbstractCoordinator#joinGroupIfNeeded}},
 is not respected. Debugging revealed that the {{Timer}} instance is technically already 
expired, so a sleep of 0 milliseconds is used, which defeats the purpose of the backoff 
timeout.

The minimal backoff timeout should be respected.
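
For illustration only (a sketch, not the actual AbstractCoordinator code), "respecting the 
minimal backoff" could mean choosing the retry sleep as follows:

{code:java}
// Sketch: pick the sleep between join attempts so that the configured retry backoff is
// respected even when the caller's timer has already expired (remainingMs == 0),
// instead of falling through to a 0 ms sleep.
static long joinRetrySleepMs(final long timerRemainingMs, final long retryBackoffMs) {
    return timerRemainingMs > 0
        ? Math.min(timerRemainingMs, retryBackoffMs)
        : retryBackoffMs;
}
{code}

The log excerpt below shows the resulting tight retry loop when the sleep collapses to 0 ms.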
{code:java}
2021-03-30 08:30:24,488 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] JoinGroup failed: Coordinator 
127.0.0.1:9092 (id: 2147483634 rack: null) is loading the group.
2021-03-30 08:30:24,488 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] Rebalance failed.
org.apache.kafka.common.errors.CoordinatorLoadInProgressException: The 
coordinator is loading and hence can't process requests.
2021-03-30 08:30:24,488 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] (Re-)joining group
2021-03-30 08:30:24,489 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] JoinGroup failed: Coordinator 
127.0.0.1:9092 (id: 2147483634 rack: null) is loading the group.
2021-03-30 08:30:24,489 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] Rebalance failed.
org.apache.kafka.common.errors.CoordinatorLoadInProgressException: The 
coordinator is loading and hence can't process requests.
2021-03-30 08:30:24,489 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] (Re-)joining group
2021-03-30 08:30:24,490 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] JoinGroup failed: Coordinator 
127.0.0.1:9092 (id: 2147483634 rack: null) is loading the group.
2021-03-30 08:30:24,490 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] Rebalance failed.
org.apache.kafka.common.errors.CoordinatorLoadInProgressException: The 
coordinator is loading and hence can't process requests.
2021-03-30 08:30:24,490 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] (Re-)joining group
2021-03-30 08:30:24,491 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] JoinGroup failed: Coordinator 
127.0.0.1:9092 (id: 2147483634 rack: null) is loading the group.
2021-03-30 08:30:24,491 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] Rebalance failed.
org.apache.kafka.common.errors.CoordinatorLoadInProgressException: The 
coordinator is loading and hence can't process requests.
2021-03-30 08:30:24,491 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] (Re-)joining group
2021-03-30 08:30:24,492 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] JoinGroup failed: Coordinator 
127.0.0.1:9092 (id: 2147483634 rack: null) is loading the group.
2021-03-30 08:30:24,492 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] Rebalance failed.
org.apache.kafka.common.errors.CoordinatorLoadInProgressException: The 
coordinator is loading and hence can't process requests.
2021-03-30 08:30:24,492 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] (Re-)joining group

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12639) AbstractCoordinator ignores backoff timeout when joining the consumer group

2021-04-09 Thread Matiss Gutmanis (Jira)
Matiss Gutmanis created KAFKA-12639:
---

 Summary: AbstractCoordinator ignores backoff timeout when joining 
the consumer group
 Key: KAFKA-12639
 URL: https://issues.apache.org/jira/browse/KAFKA-12639
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 2.7.0
Reporter: Matiss Gutmanis


We observed heavy logging while trying to join a consumer group during partial 
unavailability of the Kafka cluster (this is part of our testing process). It seems that 
{{rebalanceConfig.retryBackoffMs}}, used in 
{{org.apache.kafka.clients.consumer.internals.AbstractCoordinator#joinGroupIfNeeded}},
 is not respected. Debugging revealed that the {{Timer}} instance is technically already 
expired, so a sleep of 0 milliseconds is used, which defeats the purpose of the backoff 
timeout.

The minimal backoff timeout should be respected.

 
{code:java}
2021-03-30 08:30:24,488 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] JoinGroup failed: Coordinator 
127.0.0.1:9092 (id: 2147483634 rack: null) is loading the group.
2021-03-30 08:30:24,488 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] Rebalance failed.
org.apache.kafka.common.errors.CoordinatorLoadInProgressException: The 
coordinator is loading and hence can't process requests.
2021-03-30 08:30:24,488 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] (Re-)joining group
2021-03-30 08:30:24,489 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] JoinGroup failed: Coordinator 
127.0.0.1:9092 (id: 2147483634 rack: null) is loading the group.
2021-03-30 08:30:24,489 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] Rebalance failed.
org.apache.kafka.common.errors.CoordinatorLoadInProgressException: The 
coordinator is loading and hence can't process requests.
2021-03-30 08:30:24,489 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] (Re-)joining group
2021-03-30 08:30:24,490 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] JoinGroup failed: Coordinator 
127.0.0.1:9092 (id: 2147483634 rack: null) is loading the group.
2021-03-30 08:30:24,490 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] Rebalance failed.
org.apache.kafka.common.errors.CoordinatorLoadInProgressException: The 
coordinator is loading and hence can't process requests.
2021-03-30 08:30:24,490 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] (Re-)joining group
2021-03-30 08:30:24,491 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] JoinGroup failed: Coordinator 
127.0.0.1:9092 (id: 2147483634 rack: null) is loading the group.
2021-03-30 08:30:24,491 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] Rebalance failed.
org.apache.kafka.common.errors.CoordinatorLoadInProgressException: The 
coordinator is loading and hence can't process requests.
2021-03-30 08:30:24,491 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] (Re-)joining group
2021-03-30 08:30:24,492 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] JoinGroup failed: Coordinator 
127.0.0.1:9092 (id: 2147483634 rack: null) is loading the group.
2021-03-30 08:30:24,492 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] Rebalance failed.
org.apache.kafka.common.errors.CoordinatorLoadInProgressException: The 
coordinator is loading and hence can't process requests.
2021-03-30 08:30:24,492 INFO 
[fs2-kafka-consumer-41][o.a.k.c.c.i.AbstractCoordinator] [Consumer 
clientId=app_clientid, groupId=consumer-group] (Re-)joining group

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #21

2021-04-09 Thread Apache Jenkins Server
See