[jira] [Resolved] (KAFKA-10340) Source connectors should report error when trying to produce records to non-existent topics instead of hanging forever

2021-03-18 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10340.
---
  Reviewer: Randall Hauch
Resolution: Fixed

> Source connectors should report error when trying to produce records to 
> non-existent topics instead of hanging forever
> --
>
> Key: KAFKA-10340
> URL: https://issues.apache.org/jira/browse/KAFKA-10340
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.5.1, 2.7.0, 2.6.1, 2.8.0
>Reporter: Arjun Satish
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 3.0.0, 2.7.1, 2.6.2
>
>
> Currently, a source connector will blindly attempt to write a record to a 
> Kafka topic. When the topic does not exist, its creation is controlled by the 
> {{auto.create.topics.enable}} config on the brokers. When auto.create is 
> disabled, the producer.send() call on the Connect worker will hang 
> indefinitely (due to the "infinite retries" configuration for said producer). 
> In setups where this config is usually disabled, the source connector simply 
> appears to hang and not produce any output.
> It is desirable to log an info or error message (or otherwise inform the user) 
> that the connector is simply stuck waiting for the destination topic to be 
> created. When the worker has permission to inspect the broker settings, it can 
> use the {{listTopics}} and {{describeConfigs}} APIs in AdminClient to check 
> whether the topic exists and whether the brokers will 
> {{auto.create.topics.enable}} topics, and throw an error if neither is the 
> case.
> With the recently merged 
> [KIP-158|https://cwiki.apache.org/confluence/display/KAFKA/KIP-158%3A+Kafka+Connect+should+allow+source+connectors+to+set+topic-specific+settings+for+new+topics],
>  this becomes even more of a corner case: when topic creation settings are 
> enabled, the worker should handle the case where topic creation is disabled, 
> {{auto.create.topics.enable}} is set to false, and the topic does not exist.
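
For illustration, here is a minimal sketch of the kind of pre-flight check 
described above, using the {{listTopics}} and {{describeConfigs}} APIs. This is 
not the actual Connect implementation; the topic name, bootstrap address, and 
broker id are placeholders.

{code:java}
import java.util.Collections;
import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;
import org.apache.kafka.connect.errors.ConnectException;

public class TopicPreflightCheck {

    // Fail fast if the destination topic is missing and the brokers will not auto-create it.
    public static void check(String bootstrapServers, String brokerId, String topic) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        try (Admin admin = AdminClient.create(props)) {
            Set<String> existing = admin.listTopics().names().get();
            if (existing.contains(topic)) {
                return; // topic exists; the producer can write to it
            }
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, brokerId);
            Config brokerConfig = admin.describeConfigs(Collections.singleton(broker)).all().get().get(broker);
            ConfigEntry autoCreate = brokerConfig.get("auto.create.topics.enable");
            if (autoCreate == null || !Boolean.parseBoolean(autoCreate.value())) {
                throw new ConnectException("Topic '" + topic + "' does not exist and brokers will not auto-create it");
            }
        }
    }
}
{code}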



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-721: Enable connector log contexts in Connect Log4j configuration

2021-03-18 Thread Randall Hauch
Thanks, Dongjin.

We still have some time before this KIP might be approved, so I don’t want
to block any work on KIP-653. It’s fine for the KIP-653 PR to be merged
first. I’ll just have to update my PR when this KIP passes.

On Thu, Mar 18, 2021 at 1:30 AM Dongjin Lee  wrote:

> Hi Randall,
>
> I am +1 for this proposal. Sure, changing this setting manually is so
> annoying. I think this proposal should be applied as soon as possible.
>
> However, I have a question: as you already know, the upgrade to Log4j2[^1]
> was already passed but not merged yet. Which one would be the right working
> path? Merge KIP-653 first and KIP-721 later, or merge KIP-721 first and
> apply the change into KIP-653?
>
> Thanks,
> Dongjin
>
> [^1]:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-653%3A+Upgrade+log4j+to+log4j2
>
> On Wed, Mar 17, 2021 at 6:31 AM Randall Hauch  wrote:
>
> > Hello all,
> >
> > I'd like to propose KIP-721 to change Connect's Log4J configuration that
> we
> > ship with AK. This KIP will enable by default Connect's valuable
> connector
> > log contexts, which was added as part of KIP-449 to include connector-
> and
> > task-specific information to every log message output by the connector,
> its
> > tasks, or the worker thread operating those components.
> >
> > The details are here:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-721%3A+Enable+connector+log+contexts+in+Connect+Log4j+configuration
> >
> > The earlier KIP-449 (approved and implemented in AK 2.3.0) is here:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-449%3A+Add+connector+contexts+to+Connect+worker+logs
> >
> > I look forward to your feedback!
> >
> > Best regards,
> >
> > Randall
> >
>
>
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathematical world.*
>
>
>
> *github: github.com/dongjinleekr <https://github.com/dongjinleekr>
> keybase: https://keybase.io/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr <https://kr.linkedin.com/in/dongjinleekr>
> speakerdeck: speakerdeck.com/dongjin <https://speakerdeck.com/dongjin>*
>


[DISCUSS] KIP-722: Enable connector client overrides by default

2021-03-16 Thread Randall Hauch
Hello all,

I'd like to propose KIP-722 to change the default value of the existing
`connector.client.config.override.policy` Connect worker configuration, so
that by default connectors can override client properties. The details are
here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-722%3A+Enable+connector+client+overrides+by+default

The feature and worker config property were originally added by KIP-458
(approved and implemented in AK 2.3.0):
https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy
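
As a rough illustration of what this changes (an example only -- the property
values are made up, and the override prefixes come from KIP-458): today a
worker must opt in with

    connector.client.config.override.policy=All

before a connector configuration may override client settings using the
`producer.override.`, `consumer.override.`, or `admin.override.` prefixes, for
example:

    producer.override.compression.type=lz4
    consumer.override.max.poll.records=500

KIP-722 only proposes making `All` the default value of that worker property;
the override prefixes and the pluggable policy mechanism are unchanged.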

I look forward to your feedback!

Best regards,

Randall


[DISCUSS] KIP-721: Enable connector log contexts in Connect Log4j configuration

2021-03-16 Thread Randall Hauch
Hello all,

I'd like to propose KIP-721 to change Connect's Log4J configuration that we
ship with AK. This KIP will enable by default Connect's valuable connector
log contexts, which were added as part of KIP-449 to add connector- and
task-specific information to every log message output by the connector, its
tasks, or the worker thread operating those components.
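
To make the effect concrete, here is an illustrative example (the messages are
made up; only the prefix format comes from KIP-449). Without the change a
worker logs lines like

    [2021-03-16 10:00:00,000] INFO Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask)

and with connector log contexts enabled, the same line identifies the connector
and task that produced it:

    [2021-03-16 10:00:00,000] INFO [my-connector|task-0] Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask)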

The details are here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-721%3A+Enable+connector+log+contexts+in+Connect+Log4j+configuration

The earlier KIP-449 (approved and implemented in AK 2.3.0) is here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-449%3A+Add+connector+contexts+to+Connect+worker+logs

I look forward to your feedback!

Best regards,

Randall


Proposed breaking changes for Connect in AK 3.0.0

2021-03-16 Thread Randall Hauch
The next release of AK will be 3.0.0. Since this is a major release, we
have an opportunity to:

   - remove previously deprecated worker configuration properties; and
   - change some of Connect's defaults that were chosen previously to
   maintain backward compatibility, but for which there are more sensible
   defaults.

I've taken the liberty of creating a wiki page [1] that lists all of the
Connect-related KIPs since AK 0.10.0.0 (the release after Connect was
introduced), and identifies a small set of changes that are appropriate
only for major releases.

This page is not a KIP, but will hopefully help us identify any behaviors
or APIs that we may wish to change in AK 3.0.0. Note that some changes have
been already approved, and we need to decide whether to make those changes
in AK 3.0.0. Other changes will still require a formal KIP with discussion
and approval.

Please use this thread to discuss these or other proposed changes.

Thanks,

Randall

[1]
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=177047362


[jira] [Created] (KAFKA-12484) Enable Connect's connector log contexts by default

2021-03-16 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-12484:
-

 Summary: Enable Connect's connector log contexts by default
 Key: KAFKA-12484
 URL: https://issues.apache.org/jira/browse/KAFKA-12484
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Randall Hauch
 Fix For: 3.0.0


Connect's Log4J configuration does not by default log the connector contexts. 
That feature was added in 
[KIP-449|https://cwiki.apache.org/confluence/display/KAFKA/KIP-449%3A+Add+connector+contexts+to+Connect+worker+logs]
 and first appeared in AK 2.3.0, but it was not enabled by default since that 
would not have been backward compatible.

But with AK 3.0.0, we have the opportunity to change the default in 
{{config/connect-log4j.properties}} to enable connector log contexts.
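
Concretely, the change would be along these lines (a sketch only; the exact 
contents of the shipped file may differ). KIP-449 exposes the context through 
the {{connector.context}} MDC key, so enabling it by default is a matter of 
including that key in the conversion pattern:

{code}
# Prepend the connector context (e.g. "[my-connector|task-0] ") to every log message
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %X{connector.context}%m (%c:%L)%n
{code}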



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12483) Enable client overrides in connector configs by default

2021-03-16 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-12483:
-

 Summary: Enable client overrides in connector configs by default
 Key: KAFKA-12483
 URL: https://issues.apache.org/jira/browse/KAFKA-12483
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Randall Hauch
 Fix For: 3.0.0


Connector-specific client overrides were added in 
[KIP-458|https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy],
 but that feature is not enabled by default since it would not have been 
backward compatible.

But with AK 3.0.0, we have the opportunity to enable connector client overrides 
by default by changing the worker config's 
{{connector.client.config.override.policy}} default value to {{All}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12482) Remove deprecated Connect worker configs

2021-03-16 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-12482:
-

 Summary: Remove deprecated Connect worker configs
 Key: KAFKA-12482
 URL: https://issues.apache.org/jira/browse/KAFKA-12482
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Randall Hauch
 Fix For: 3.0.0


The following Connect worker configuration properties were deprecated and 
should be removed in 3.0.0:
 * {{rest.host.name}} (deprecated in 
[KIP-208|https://cwiki.apache.org/confluence/display/KAFKA/KIP-208%3A+Add+SSL+support+to+Kafka+Connect+REST+interface])

 * {{rest.port}} (deprecated in 
[KIP-208|https://cwiki.apache.org/confluence/display/KAFKA/KIP-208%3A+Add+SSL+support+to+Kafka+Connect+REST+interface])
 * {{internal.key.converter}} (deprecated in 
[KIP-174|https://cwiki.apache.org/confluence/display/KAFKA/KIP-174+-+Deprecate+and+remove+internal+converter+configs+in+WorkerConfig])
 * {{internal.value.converter}} (deprecated in 
[KIP-174|https://cwiki.apache.org/confluence/display/KAFKA/KIP-174+-+Deprecate+and+remove+internal+converter+configs+in+WorkerConfig])



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12380) Executor in Connect's Worker is not shut down when the worker is

2021-02-26 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-12380:
-

 Summary: Executor in Connect's Worker is not shut down when the 
worker is
 Key: KAFKA-12380
 URL: https://issues.apache.org/jira/browse/KAFKA-12380
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Randall Hauch


The `Worker` class has an [`executor` 
field|https://github.com/apache/kafka/blob/02226fa090513882b9229ac834fd493d71ae6d96/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java#L100]
 that the public constructor initializes with a [new cached thread 
pool|https://github.com/apache/kafka/blob/02226fa090513882b9229ac834fd493d71ae6d96/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java#L127].

When the worker is stopped, it does not shut down this executor. This is 
normally okay in the Connect and MirrorMaker 2 runtimes, because the worker is 
stopped only when the JVM is stopped (via the shutdown hook in the herders).

However, we instantiate and stop the herder many times in our integration 
tests, which means we're not necessarily shutting down the worker's executor. 
Normally this won't hurt, as long as all of the runnables that the executor 
threads run actually do terminate. But it's possible those threads *might* not 
terminate in all tests. TBH, I don't know whether such cases actually exist.
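
For reference, a minimal sketch of the kind of cleanup this implies (not the 
actual Worker code; the method name is illustrative):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public class ExecutorShutdown {

    // Shut down the worker's executor so stopped workers (e.g. in integration tests) don't leak threads.
    static void shutdownExecutor(ExecutorService executor) {
        executor.shutdown(); // stop accepting new work
        try {
            if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
                executor.shutdownNow(); // interrupt runnables that have not terminated
            }
        } catch (InterruptedException e) {
            executor.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}
{code}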

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12339) Add retry to admin client's listOffsets

2021-02-22 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-12339.
---
  Reviewer: Randall Hauch
Resolution: Fixed

Merged to `trunk`, and cherry-picked to:
 * `2.8` for inclusion in 2.8.0 (with release manager approval)
 * `2.7` for inclusion in 2.7.1
 * `2.6` for inclusion in 2.6.2 (with release manager approval)
 * `2.5` for inclusion in 2.5.2

> Add retry to admin client's listOffsets
> ---
>
> Key: KAFKA-12339
> URL: https://issues.apache.org/jira/browse/KAFKA-12339
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.5.2, 2.8.0, 2.7.1, 2.6.2
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Blocker
> Fix For: 2.5.2, 2.8.0, 2.7.1, 2.6.2
>
>
> After upgrading our connector environment to 2.9.0-SNAPSHOT, the Connect 
> cluster sometimes encounters the following error.
> {quote}Uncaught exception in herder work thread, exiting:  
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder:324)
> org.apache.kafka.connect.errors.ConnectException: Error while getting end 
> offsets for topic 'connect-storage-topic-connect-cluster-1'
> at org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:689)
> at 
> org.apache.kafka.connect.util.KafkaBasedLog.readToLogEnd(KafkaBasedLog.java:338)
> at org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:195)
> at 
> org.apache.kafka.connect.storage.KafkaStatusBackingStore.start(KafkaStatusBackingStore.java:216)
> at 
> org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:129)
> at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:310)
> at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
> at org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:668)
> ... 10 more
> {quote}
> [https://github.com/apache/kafka/pull/9780] added a shared admin client to get 
> end offsets. KafkaAdminClient#listOffsets does not handle topic-level errors, 
> so a topic-level UnknownTopicOrPartitionException can prevent the worker from 
> starting when a newly created internal topic is NOT yet synced to all brokers.
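
For context, a minimal sketch of the retry approach described above (this is 
not the actual patch; the timeout and partition arguments are placeholders):

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;

public class ListOffsetsRetry {

    // Retry listOffsets while metadata for a newly created topic propagates to all brokers.
    static void waitForEndOffsets(Admin admin, TopicPartition partition, long timeoutMs) throws Exception {
        Map<TopicPartition, OffsetSpec> request = Collections.singletonMap(partition, OffsetSpec.latest());
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
            try {
                admin.listOffsets(request).all().get();
                return; // end offsets are available
            } catch (ExecutionException e) {
                boolean retriable = e.getCause() instanceof UnknownTopicOrPartitionException;
                if (!retriable || System.currentTimeMillis() >= deadline) {
                    throw e;
                }
                Thread.sleep(1000L); // topic metadata not yet synced to all brokers; try again
            }
        }
    }
}
{code}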



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12340) Recent change to use SharedTopicAdmin results in potential resource leak in deprecated backing store constructors

2021-02-22 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-12340.
---
Resolution: Fixed

Merged to `trunk`, and cherry-picked to:
 * `2.8` for inclusion in 2.8.0 (with release manager approval)
 * `2.7` for inclusion in 2.7.1
 * `2.6` for inclusion in 2.6.2 (with release manager approval)
 * `2.5` for inclusion in 2.5.2

> Recent change to use SharedTopicAdmin results in potential resource leak in 
> deprecated backing store constructors
> -
>
> Key: KAFKA-12340
> URL: https://issues.apache.org/jira/browse/KAFKA-12340
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.5.2, 2.8.0, 2.7.1, 2.6.2
>    Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 2.5.2, 2.8.0, 2.7.1, 2.6.2
>
>
> When KAFKA-10021 modified the Connect `Kafka*BackingStore` classes, we 
> deprecated the old constructors and changed all uses within AK to use the new 
> constructors that take a `Supplier`.
> If the old deprecated constructors are used (outside of AK), then they will 
> not close the Admin clients that are created by the "default" supplier.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 2.8.0 release

2021-02-19 Thread Randall Hauch
There are two fixes for regressions in Connect caused by a bugfix previously
merged in 2.8:

   - https://issues.apache.org/jira/browse/KAFKA-12339 -- add retry to the
   admin client's listOffsets for InvalidMetadataExceptions, allowing Connect
   to start if its newly-created topics are not yet synced to all brokers
   - https://issues.apache.org/jira/browse/KAFKA-12340 -- fix a resource leak
   in Connect's Kafka*BackingStore implementations when used outside of Connect

The PRs for these have already been approved and merged to the `trunk`
branch (and backported to a few older branches). I'd like to backport these
to the `2.8` branch.

Thanks, and best regards!

Randall

On Thu, Feb 18, 2021 at 10:23 AM John Roesler  wrote:

> Hello again, all.
>
> This is a notice that we are now in Code Freeze for the 2.8 branch.
>
> From now until the release, only fixes for blockers should be merged to
> the release branch. Fixes for failing tests are allowed and encouraged.
> Documentation-only commits are also ok, in case you have forgotten to
> update the docs for some features in 2.8.0.
>
> Once we have a green build and passing system tests, I will cut the first
> RC.
>
> Thank you,
> John
>
> On Sun, Feb 7, 2021, at 09:59, John Roesler wrote:
> > Hello all,
> >
> > I have just cut the branch for 2.8 and sent the notification
> > email to the dev mailing list.
> >
> > As a reminder, the next checkpoint toward the 2.8.0 release
> > is Code Freeze on Feb 17th.
> >
> > To ensure a high-quality release, we should now focus our
> > efforts on stabilizing the 2.8 branch, including resolving
> > failures, writing new tests, and fixing documentation.
> >
> > Thanks as always for your contributions,
> > John
> >
> >
> > On Wed, 2021-02-03 at 14:18 -0600, John Roesler wrote:
> > > Hello again, all,
> > >
> > > This is a reminder that today is the Feature Freeze
> > > deadline. To avoid any last-minute crunch or time-zone
> > > unfairness, I'll cut the branch toward the end of the week.
> > >
> > > Please wrap up your features and transition fully into a
> > > stabilization mode. The next checkpoint is Code Freeze on
> > > Feb 17th.
> > >
> > > Thanks as always for all of your contributions,
> > > John
> > >
> > > On Wed, 2021-01-27 at 12:17 -0600, John Roesler wrote:
> > > > Hello again, all.
> > > >
> > > > This is a reminder that *today* is the KIP freeze for Apache
> > > > Kafka 2.8.0.
> > > >
> > > > The next checkpoint is the Feature Freeze on Feb 3rd.
> > > >
> > > > When considering any last-minute KIPs today, please be
> > > > mindful of the scope, since we have only one week to merge a
> > > > stable implementation of the KIP.
> > > >
> > > > For those whose KIPs have been accepted already, please work
> > > > closely with your reviewers so that your features can be
> > > > merged in a stable form in before the Feb 3rd cutoff. Also,
> > > > don't forget to update the documentation as part of your
> > > > feature.
> > > >
> > > > Finally, as a gentle reminder to all contributors. There
> > > > seems to have been a recent increase in test and system test
> > > > failures. Please take some time starting now to stabilize
> > > > the codebase so we can ensure a high quality and timely
> > > > 2.8.0 release!
> > > >
> > > > Thanks to all of you for your contributions,
> > > > John
> > > >
> > > > On Sat, 2021-01-23 at 18:15 +0300, Ivan Ponomarev wrote:
> > > > > Hi John,
> > > > >
> > > > > KIP-418 is already implemented and reviewed, but I don't see it in
> the
> > > > > release plan. Can it be added?
> > > > >
> > > > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-418%3A+A+method-chaining+way+to+branch+KStream
> > > > >
> > > > > Regards,
> > > > >
> > > > > Ivan
> > > > >
> > > > > 22.01.2021 21:49, John Roesler пишет:
> > > > > > Sure thing, Leah!
> > > > > > -John
> > > > > > On Thu, Jan 21, 2021, at 07:54, Leah Thomas wrote:
> > > > > > > Hi John,
> > > > > > >
> > > > > > > KIP-659 was just accepted as well, can it be added to the
> release plan?
> > > > > > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-659%3A+Improve+TimeWindowedDeserializer+and+TimeWindowedSerde+to+handle+window+size
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Leah
> > > > > > >
> > > > > > > On Thu, Jan 14, 2021 at 9:36 AM John Roesler <
> vvcep...@apache.org> wrote:
> > > > > > >
> > > > > > > > Hi David,
> > > > > > > >
> > > > > > > > Thanks for the heads-up; it's added.
> > > > > > > >
> > > > > > > > -John
> > > > > > > >
> > > > > > > > On Thu, 2021-01-14 at 08:43 +0100, David Jacot wrote:
> > > > > > > > > Hi John,
> > > > > > > > >
> > > > > > > > > KIP-700 just got accepted. Can we add it to the release
> plan?
> > > > > > > > >
> > > > > > > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-700%3A+Add+Describe+Cluster+API
> > > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > > David
> > > > > > > > >
> > > > > > > > > On Wed, Jan 13, 2021 at 11:22 PM John Roesler <
> vvcep...@apache.org>
> > > > > > > > wrote:
> > 

Re: [DISCUSS] Apache Kafka 2.6.2 release

2021-02-19 Thread Randall Hauch
Thanks, Sophie. Commits for all 3 issues mentioned earlier have been
backported to the `2.6` branch.

On Fri, Feb 19, 2021 at 2:08 PM Sophie Blee-Goldman 
wrote:

> Thanks Randall. I think all three of those seem important and should be
> merged back to the 2.6 branch.
>
> On Fri, Feb 19, 2021 at 10:01 AM Randall Hauch  wrote:
>
> > Hi, Sophie.
> >
> > I have three new blockers related to a previous fix applied since 2.6.1
> > that I'd like to merge to the 2.6 branch:
> > * https://issues.apache.org/jira/browse/KAFKA-12340 -- potential
> resource
> > leak for Kafka*BackingStores used in Connect; PR is already approved and
> in
> > trunk
> > * https://issues.apache.org/jira/browse/KAFKA-12339 -- may cause Connect
> > worker to fail upon creation of internal topic (race condition); PR will
> > likely be approved shortly
> > * https://issues.apache.org/jira/browse/KAFKA-12343 -- Connect fails to
> > read internal topics on AK 0.10.x; PR will likely be approved shortly
> >
> > Please let me know if it's okay to merge these to the 2.6 branch for
> > inclusion in 2.6.2. Thanks!
> >
> > Randall
> >
> > On Fri, Feb 12, 2021 at 11:35 PM Randall Hauch  wrote:
> >
> > > This fix is merged and backported. Thanks!
> > >
> > > Best regards,
> > >
> > > Randall
> > >
> > > On Fri, Feb 12, 2021 at 12:22 PM Randall Hauch 
> wrote:
> > >
> > >> Hi, Sophie:
> > >>
> > >> David J. has found a regression in MirrorMaker 2 that prevents the MM2
> > >> executable from starting:
> > >> https://issues.apache.org/jira/browse/KAFKA-12326. This was caused
> by a
> > >> recent fix of mine (https://issues.apache.org/jira/browse/KAFKA-10021
> )
> > >> and is a serious regression limited to the MirrorMaker 2 executable. I
> > have
> > >> a one-line PR to fix the regression (
> > >> https://github.com/apache/kafka/pull/10122) and have verified it
> > >> corrects the MM2 executable. Once the PR is approved and with your
> > >> approval, I can cherry-pick to the `2.6` branch.
> > >>
> > >> Best regards,
> > >>
> > >> Randall
> > >>
> > >> On Wed, Feb 10, 2021 at 12:43 PM Sophie Blee-Goldman <
> > sop...@confluent.io>
> > >> wrote:
> > >>
> > >>> Ok, thanks for the update!
> > >>>
> > >>> On Wed, Feb 10, 2021 at 1:06 AM Luke Chen  wrote:
> > >>>
> > >>> > Hi Ismael & Sophie,
> > >>> > Sorry, after some investigation, I think this might not be a
> defect.
> > To
> > >>> > work with the project with a specific scala version, user might
> need
> > >>> to use
> > >>> > the same version as the project used. This issue also happened on
> > other
> > >>> > projects using scala, ex: Spark. ref:
> > >>> https://stackoverflow.com/a/61677956
> > >>> > .
> > >>> >
> > >>> > So, you can continue to cut the rc.
> > >>> >
> > >>> > Thank you very much.
> > >>> > Luke
> > >>> >
> > >>> > On Wed, Feb 10, 2021 at 11:19 AM Luke Chen 
> > wrote:
> > >>> >
> > >>> > > I just saw the defect KAFKA-12312
> > >>> > > <https://issues.apache.org/jira/browse/KAFKA-12312>, so I
> brought
> > >>> it to
> > >>> > > your attention.
> > >>> > > Do you think it's not a compatibility issue? If not, I think we
> > don't
> > >>> > need
> > >>> > > to cherry-pick the fix.
> > >>> > >
> > >>> > > Thanks.
> > >>> > > Luke
> > >>> > >
> > >>> > > On Wed, Feb 10, 2021 at 11:16 AM Ismael Juma 
> > >>> wrote:
> > >>> > >
> > >>> > >> It's a perf improvement, there was no regression. I think Luke
> > >>> needs to
> > >>> > be
> > >>> > >> clearer how this impacts users. Luke, are you referring to cases
> > >>> where
> > >>> > >> someone runs the broker in an embedded scenario (eg tests)?
> > >>> > >>
> > >>> > >> Ismael
> > >>> > >>
> > >>&

[jira] [Resolved] (KAFKA-12343) Recent change to use SharedTopicAdmin in KakfkaBasedLog fails with AK 0.10.x brokers

2021-02-19 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-12343.
---
  Reviewer: Konstantine Karantasis
Resolution: Fixed

> Recent change to use SharedTopicAdmin in KakfkaBasedLog fails with AK 0.10.x 
> brokers
> 
>
> Key: KAFKA-12343
> URL: https://issues.apache.org/jira/browse/KAFKA-12343
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.5.2, 2.8.0, 2.7.1, 2.6.2
>    Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 2.5.2, 2.8.0, 2.7.1, 2.6.2
>
>
> System test failure 
> ([sample|http://confluent-kafka-2-7-system-test-results.s3-us-west-2.amazonaws.com/2021-02-18--001.1613655226--confluentinc--2.7--54952635e5/report.html]):
> {code:java}
> Java.lang.Exception: UnsupportedVersionException: MetadataRequest versions 
> older than 4 don't support the allowAutoTopicCreation field
> at 
> org.apache.kafka.clients.admin.KafkaAdminClient$Call.fail(KafkaAdminClient.java:755)
> at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:1136)
> at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1301)
> at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1224)
> at java.lang.Thread.run(Thread.java:748)
> [2021-02-16 12:05:11,735] ERROR [Worker clientId=connect-1, 
> groupId=connect-cluster] Uncaught exception in herder work thread, exiting:  
> (org.apache.kafka.connect.runtime.distributed.Di
> stributedHerder)
> org.apache.kafka.connect.errors.ConnectException: API to get the get the end 
> offsets for topic 'connect-offsets' is unsupported on brokers at worker25:9092
> at 
> org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:680)
> at 
> org.apache.kafka.connect.util.KafkaBasedLog.readToLogEnd(KafkaBasedLog.java:338)
> at 
> org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:195)
> at 
> org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:136)
> at org.apache.kafka.connect.runtime.Worker.start(Worker.java:197)
> at 
> org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:128)
> at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:311)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnsupportedVersionException: MetadataRequest 
> versions older than 4 don't support the allowAutoTopicCre
> ation field
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
> at 
> org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:668)
> ... 11 more   {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 2.6.2 release

2021-02-19 Thread Randall Hauch
Hi, Sophie.

I have three new blockers related to a previous fix applied since 2.6.1
that I'd like to merge to the 2.6 branch:
* https://issues.apache.org/jira/browse/KAFKA-12340 -- potential resource
leak for Kafka*BackingStores used in Connect; PR is already approved and in
trunk
* https://issues.apache.org/jira/browse/KAFKA-12339 -- may cause Connect
worker to fail upon creation of internal topic (race condition); PR will
likely be approved shortly
* https://issues.apache.org/jira/browse/KAFKA-12343 -- Connect fails to
read internal topics on AK 0.10.x; PR will likely be approved shortly

Please let me know if it's okay to merge these to the 2.6 branch for
inclusion in 2.6.2. Thanks!

Randall

On Fri, Feb 12, 2021 at 11:35 PM Randall Hauch  wrote:

> This fix is merged and backported. Thanks!
>
> Best regards,
>
> Randall
>
> On Fri, Feb 12, 2021 at 12:22 PM Randall Hauch  wrote:
>
>> Hi, Sophie:
>>
>> David J. has found a regression in MirrorMaker 2 that prevents the MM2
>> executable from starting:
>> https://issues.apache.org/jira/browse/KAFKA-12326. This was caused by a
>> recent fix of mine (https://issues.apache.org/jira/browse/KAFKA-10021)
>> and is a serious regression limited to the MirrorMaker 2 executable. I have
>> a one-line PR to fix the regression (
>> https://github.com/apache/kafka/pull/10122) and have verified it
>> corrects the MM2 executable. Once the PR is approved and with your
>> approval, I can cherry-pick to the `2.6` branch.
>>
>> Best regards,
>>
>> Randall
>>
>> On Wed, Feb 10, 2021 at 12:43 PM Sophie Blee-Goldman 
>> wrote:
>>
>>> Ok, thanks for the update!
>>>
>>> On Wed, Feb 10, 2021 at 1:06 AM Luke Chen  wrote:
>>>
>>> > Hi Ismael & Sophie,
>>> > Sorry, after some investigation, I think this might not be a defect. To
>>> > work with the project with a specific scala version, user might need
>>> to use
>>> > the same version as the project used. This issue also happened on other
>>> > projects using scala, ex: Spark. ref:
>>> https://stackoverflow.com/a/61677956
>>> > .
>>> >
>>> > So, you can continue to cut the rc.
>>> >
>>> > Thank you very much.
>>> > Luke
>>> >
>>> > On Wed, Feb 10, 2021 at 11:19 AM Luke Chen  wrote:
>>> >
>>> > > I just saw the defect KAFKA-12312
>>> > > <https://issues.apache.org/jira/browse/KAFKA-12312>, so I brought
>>> it to
>>> > > your attention.
>>> > > Do you think it's not a compatibility issue? If not, I think we don't
>>> > need
>>> > > to cherry-pick the fix.
>>> > >
>>> > > Thanks.
>>> > > Luke
>>> > >
>>> > > On Wed, Feb 10, 2021 at 11:16 AM Ismael Juma 
>>> wrote:
>>> > >
>>> > >> It's a perf improvement, there was no regression. I think Luke
>>> needs to
>>> > be
>>> > >> clearer how this impacts users. Luke, are you referring to cases
>>> where
>>> > >> someone runs the broker in an embedded scenario (eg tests)?
>>> > >>
>>> > >> Ismael
>>> > >>
>>> > >> On Tue, Feb 9, 2021, 6:50 PM Sophie Blee-Goldman <
>>> sop...@confluent.io>
>>> > >> wrote:
>>> > >>
>>> > >> > What do you think Ismael? I agreed initially because I saw the
>>> commit
>>> > >> > message says it fixes a performance regression. But admittedly I
>>> don't
>>> > >> have
>>> > >> > much context on this particular issue
>>> > >> >
>>> > >> > If it's low risk then I don't have a strong argument against
>>> including
>>> > >> it.
>>> > >> > However
>>> > >> > I aim to cut the rc tomorrow or Thursday, and if it hasn't been
>>> > >> > cherrypicked by then
>>> > >> > I won't block the release on it.
>>> > >> >
>>> > >> > On Tue, Feb 9, 2021 at 4:53 PM Luke Chen 
>>> wrote:
>>> > >> >
>>> > >> > > Hi Ismael,
>>> > >> > > Yes, I agree it's like an improvement, not a bug. I don't
>>> insist on
>>> > >> > putting
>>> > >> > > it into 2.6, just 

[jira] [Created] (KAFKA-12343) Recent change to use SharedTopicAdmin in KakfkaBasedLog fails with AK 0.10.x brokers

2021-02-18 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-12343:
-

 Summary: Recent change to use SharedTopicAdmin in KakfkaBasedLog 
fails with AK 0.10.x brokers
 Key: KAFKA-12343
 URL: https://issues.apache.org/jira/browse/KAFKA-12343
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.5.2, 2.8.0, 2.7.1, 2.6.2
Reporter: Randall Hauch
Assignee: Randall Hauch


System test failure:
{code:java}

Java.lang.Exception: UnsupportedVersionException: MetadataRequest versions 
older than 4 don't support the allowAutoTopicCreation field
at 
org.apache.kafka.clients.admin.KafkaAdminClient$Call.fail(KafkaAdminClient.java:755)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:1136)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1301)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1224)
at java.lang.Thread.run(Thread.java:748)
[2021-02-16 12:05:11,735] ERROR [Worker clientId=connect-1, 
groupId=connect-cluster] Uncaught exception in herder work thread, exiting:  
(org.apache.kafka.connect.runtime.distributed.Di
stributedHerder)
org.apache.kafka.connect.errors.ConnectException: API to get the get the end 
offsets for topic 'connect-offsets' is unsupported on brokers at worker25:9092
at 
org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:680)
at 
org.apache.kafka.connect.util.KafkaBasedLog.readToLogEnd(KafkaBasedLog.java:338)
at 
org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:195)
at 
org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:136)
at org.apache.kafka.connect.runtime.Worker.start(Worker.java:197)
at 
org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:128)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:311)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.UnsupportedVersionException: MetadataRequest 
versions older than 4 don't support the allowAutoTopicCre
ation field
at 
org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at 
org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:668)
... 11 more   {code}
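
One way to tolerate old brokers here -- a sketch only, not necessarily the 
shape of the actual fix -- is to catch the UnsupportedVersionException from the 
admin-based lookup and fall back to the consumer, which still works with 0.10.x 
brokers:

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.UnsupportedVersionException;

public class EndOffsetsWithFallback {

    // Prefer the admin client, but fall back to Consumer#endOffsets on older brokers.
    static Map<TopicPartition, Long> endOffsets(Admin admin, Consumer<?, ?> consumer,
                                                Set<TopicPartition> partitions) throws Exception {
        Map<TopicPartition, OffsetSpec> request = new HashMap<>();
        partitions.forEach(tp -> request.put(tp, OffsetSpec.latest()));
        try {
            Map<TopicPartition, Long> result = new HashMap<>();
            admin.listOffsets(request).all().get()
                 .forEach((tp, info) -> result.put(tp, info.offset()));
            return result;
        } catch (Exception e) {
            if (e.getCause() instanceof UnsupportedVersionException) {
                // Brokers too old for the required admin API version; use the consumer instead.
                return consumer.endOffsets(partitions);
            }
            throw e;
        }
    }
}
{code}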



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12340) Recent change to use SharedTopicAdmin results in potential resource leak in deprecated backing store constructors

2021-02-18 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-12340:
-

 Summary: Recent change to use SharedTopicAdmin results in 
potential resource leak in deprecated backing store constructors
 Key: KAFKA-12340
 URL: https://issues.apache.org/jira/browse/KAFKA-12340
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.5.2, 2.8.0, 2.7.1, 2.6.2
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 2.5.2, 2.8.0, 2.7.1, 2.6.2


When KAFKA-10021 modified the Connect `Kafka*BackingStore` classes, we 
deprecated the old constructors and changed all uses within AK to use the new 
constructors that take a `Supplier`.

If the old deprecated constructors are used (outside of AK), then they will not 
close the Admin clients that are created by the "default" supplier.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 2.6.2 release

2021-02-12 Thread Randall Hauch
This fix is merged and backported. Thanks!

Best regards,

Randall

On Fri, Feb 12, 2021 at 12:22 PM Randall Hauch  wrote:

> Hi, Sophie:
>
> David J. has found a regression in MirrorMaker 2 that prevents the MM2
> executable from starting:
> https://issues.apache.org/jira/browse/KAFKA-12326. This was caused by a
> recent fix of mine (https://issues.apache.org/jira/browse/KAFKA-10021)
> and is a serious regression limited to the MirrorMaker 2 executable. I have
> a one-line PR to fix the regression (
> https://github.com/apache/kafka/pull/10122) and have verified it corrects
> the MM2 executable. Once the PR is approved and with your approval, I can
> cherry-pick to the `2.6` branch.
>
> Best regards,
>
> Randall
>
> On Wed, Feb 10, 2021 at 12:43 PM Sophie Blee-Goldman 
> wrote:
>
>> Ok, thanks for the update!
>>
>> On Wed, Feb 10, 2021 at 1:06 AM Luke Chen  wrote:
>>
>> > Hi Ismael & Sophie,
>> > Sorry, after some investigation, I think this might not be a defect. To
>> > work with the project with a specific scala version, user might need to
>> use
>> > the same version as the project used. This issue also happened on other
>> > projects using scala, ex: Spark. ref:
>> https://stackoverflow.com/a/61677956
>> > .
>> >
>> > So, you can continue to cut the rc.
>> >
>> > Thank you very much.
>> > Luke
>> >
>> > On Wed, Feb 10, 2021 at 11:19 AM Luke Chen  wrote:
>> >
>> > > I just saw the defect KAFKA-12312
>> > > <https://issues.apache.org/jira/browse/KAFKA-12312>, so I brought it
>> to
>> > > your attention.
>> > > Do you think it's not a compatibility issue? If not, I think we don't
>> > need
>> > > to cherry-pick the fix.
>> > >
>> > > Thanks.
>> > > Luke
>> > >
>> > > On Wed, Feb 10, 2021 at 11:16 AM Ismael Juma 
>> wrote:
>> > >
>> > >> It's a perf improvement, there was no regression. I think Luke needs
>> to
>> > be
>> > >> clearer how this impacts users. Luke, are you referring to cases
>> where
>> > >> someone runs the broker in an embedded scenario (eg tests)?
>> > >>
>> > >> Ismael
>> > >>
>> > >> On Tue, Feb 9, 2021, 6:50 PM Sophie Blee-Goldman <
>> sop...@confluent.io>
>> > >> wrote:
>> > >>
>> > >> > What do you think Ismael? I agreed initially because I saw the
>> commit
>> > >> > message says it fixes a performance regression. But admittedly I
>> don't
>> > >> have
>> > >> > much context on this particular issue
>> > >> >
>> > >> > If it's low risk then I don't have a strong argument against
>> including
>> > >> it.
>> > >> > However
>> > >> > I aim to cut the rc tomorrow or Thursday, and if it hasn't been
>> > >> > cherrypicked by then
>> > >> > I won't block the release on it.
>> > >> >
>> > >> > On Tue, Feb 9, 2021 at 4:53 PM Luke Chen 
>> wrote:
>> > >> >
>> > >> > > Hi Ismael,
>> > >> > > Yes, I agree it's like an improvement, not a bug. I don't insist
>> on
>> > >> > putting
>> > >> > > it into 2.6, just want to bring it to your attention.
>> > >> > > In my opinion, this issue will block users who adopt the scala
>> > 2.13.4
>> > >> or
>> > >> > > later to use Kafka 2.6.
>> > >> > > So if we still have time, we can consider to cherry-pick the fix
>> > into
>> > >> 2.6
>> > >> > > and 2.7.
>> > >> > >
>> > >> > > What do you think?
>> > >> > >
>> > >> > > Thank you.
>> > >> > > Luke
>> > >> > >
>> > >> > > On Wed, Feb 10, 2021 at 3:24 AM Ismael Juma 
>> > >> wrote:
>> > >> > >
>> > >> > > > Can you elaborate why this needs to be in 2.6? Seems like an
>> > >> > improvement
>> > >> > > > versus a critical bug fix.
>> > >> > > >
>> > >> > > > Ismael
>> > >> > > >
>> > >> > > > On M

Re: [DISCUSS] Apache Kafka 2.6.2 release

2021-02-12 Thread Randall Hauch
Hi, Sophie:

David J. has found a regression in MirrorMaker 2 that prevents the MM2
executable from starting: https://issues.apache.org/jira/browse/KAFKA-12326.
This was caused by a recent fix of mine (
https://issues.apache.org/jira/browse/KAFKA-10021) and is a serious
regression limited to the MirrorMaker 2 executable. I have a one-line PR to
fix the regression (https://github.com/apache/kafka/pull/10122) and have
verified it corrects the MM2 executable. Once the PR is approved and with
your approval, I can cherry-pick to the `2.6` branch.

Best regards,

Randall

On Wed, Feb 10, 2021 at 12:43 PM Sophie Blee-Goldman 
wrote:

> Ok, thanks for the update!
>
> On Wed, Feb 10, 2021 at 1:06 AM Luke Chen  wrote:
>
> > Hi Ismael & Sophie,
> > Sorry, after some investigation, I think this might not be a defect. To
> > work with the project with a specific scala version, user might need to
> use
> > the same version as the project used. This issue also happened on other
> > projects using scala, ex: Spark. ref:
> https://stackoverflow.com/a/61677956
> > .
> >
> > So, you can continue to cut the rc.
> >
> > Thank you very much.
> > Luke
> >
> > On Wed, Feb 10, 2021 at 11:19 AM Luke Chen  wrote:
> >
> > > I just saw the defect KAFKA-12312
> > > , so I brought it
> to
> > > your attention.
> > > Do you think it's not a compatibility issue? If not, I think we don't
> > need
> > > to cherry-pick the fix.
> > >
> > > Thanks.
> > > Luke
> > >
> > > On Wed, Feb 10, 2021 at 11:16 AM Ismael Juma 
> wrote:
> > >
> > >> It's a perf improvement, there was no regression. I think Luke needs
> to
> > be
> > >> clearer how this impacts users. Luke, are you referring to cases where
> > >> someone runs the broker in an embedded scenario (eg tests)?
> > >>
> > >> Ismael
> > >>
> > >> On Tue, Feb 9, 2021, 6:50 PM Sophie Blee-Goldman  >
> > >> wrote:
> > >>
> > >> > What do you think Ismael? I agreed initially because I saw the
> commit
> > >> > message says it fixes a performance regression. But admittedly I
> don't
> > >> have
> > >> > much context on this particular issue
> > >> >
> > >> > If it's low risk then I don't have a strong argument against
> including
> > >> it.
> > >> > However
> > >> > I aim to cut the rc tomorrow or Thursday, and if it hasn't been
> > >> > cherrypicked by then
> > >> > I won't block the release on it.
> > >> >
> > >> > On Tue, Feb 9, 2021 at 4:53 PM Luke Chen  wrote:
> > >> >
> > >> > > Hi Ismael,
> > >> > > Yes, I agree it's like an improvement, not a bug. I don't insist
> on
> > >> > putting
> > >> > > it into 2.6, just want to bring it to your attention.
> > >> > > In my opinion, this issue will block users who adopt the scala
> > 2.13.4
> > >> or
> > >> > > later to use Kafka 2.6.
> > >> > > So if we still have time, we can consider to cherry-pick the fix
> > into
> > >> 2.6
> > >> > > and 2.7.
> > >> > >
> > >> > > What do you think?
> > >> > >
> > >> > > Thank you.
> > >> > > Luke
> > >> > >
> > >> > > On Wed, Feb 10, 2021 at 3:24 AM Ismael Juma 
> > >> wrote:
> > >> > >
> > >> > > > Can you elaborate why this needs to be in 2.6? Seems like an
> > >> > improvement
> > >> > > > versus a critical bug fix.
> > >> > > >
> > >> > > > Ismael
> > >> > > >
> > >> > > > On Mon, Feb 8, 2021 at 6:39 PM Luke Chen 
> > wrote:
> > >> > > >
> > >> > > > > Hi Sophie,
> > >> > > > > I found there is 1 issue that should be cherry-picked into 2.6
> > and
> > >> > 2.7
> > >> > > > > branches: KAFKA-12312 <
> > >> > > https://issues.apache.org/jira/browse/KAFKA-12312
> > >> > > > >.
> > >> > > > > Simply put, *Scala* *2.13.4* is released at the end of 2020,
> and
> > >> we
> > >> > > > > upgraded to it and fixed some compatible issues on this PR
> > >> > > > > , more
> specifically,
> > >> it's
> > >> > > > here
> > >> > > > > <
> > >> > > > >
> > >> > > >
> > >> > >
> > >> >
> > >>
> >
> https://github.com/apache/kafka/pull/9643/files#diff-fda3fb44e69a19600913bd951431fb0035996c76325b1c1d84d6f34bec281205R292
> > >> > > > > >
> > >> > > > > .
> > >> > > > > We only merged this fix on *trunk*(which will be on 2.8), but
> we
> > >> > didn't
> > >> > > > > tell users (or we didn't know there'll be compatible issues)
> not
> > >> to
> > >> > > adopt
> > >> > > > > the latest *Scala* *2.13.4*.
> > >> > > > >
> > >> > > > > Therefore, I think we should cherry-pick this fix into 2.6 and
> > 2.7
> > >> > > > > branches. What do you think?
> > >> > > > >
> > >> > > > > Thank you.
> > >> > > > > Luke
> > >> > > > >
> > >> > > > >
> > >> > > > >
> > >> > > > >
> > >> > > > >
> > >> > > > > On Tue, Feb 9, 2021 at 3:10 AM Sophie Blee-Goldman <
> > >> > > sop...@confluent.io>
> > >> > > > > wrote:
> > >> > > > >
> > >> > > > > > Hey all,
> > >> > > > > >
> > >> > > > > > Since all outstanding bugfixes seem to have made their way
> > over
> > >> to
> > >> > > the
> > >> > > > > 2.6
> > >> > > > > > branch by now, I plan to move ahead with c

[jira] [Resolved] (KAFKA-12270) Kafka Connect may fail a task when racing to create topic

2021-02-03 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-12270.
---
Fix Version/s: 2.6.2
   2.7.1
   2.8.0
 Reviewer: Konstantine Karantasis
   Resolution: Fixed

Merged to `trunk` for the upcoming 2.8.0, and cherrypicked to the 2.7 branch 
for the next 2.7.1 and to the 2.6 branch for the next 2.6.2.

> Kafka Connect may fail a task when racing to create topic
> -
>
> Key: KAFKA-12270
> URL: https://issues.apache.org/jira/browse/KAFKA-12270
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.6.0, 2.7.0, 2.8.0
>    Reporter: Randall Hauch
>    Assignee: Randall Hauch
>Priority: Critical
> Fix For: 2.8.0, 2.7.1, 2.6.2
>
>
> When a source connector is configured with many tasks and uses the new topic 
> creation feature, multiple tasks may attempt to write to the same topic, see 
> that the topic does not exist, and then race to create it. The topic is only 
> created once, but some tasks might fail with:
> {code:java}
> org.apache.kafka.connect.errors.ConnectException: Task failed to create new 
> topic (name=TOPICX, numPartitions=8, replicationFactor=3, 
> replicasAssignments=null, configs={cleanup.policy=delete}). Ensure that the 
> task is authorized to create topics or that the topic exists and restart the 
> task
>   at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.maybeCreateTopic(WorkerSourceTask.java:436)
>   at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:364)
>   at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:264)
>   at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
>   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)
> ... {code}
> The reason appears to be that the WorkerSourceTask throws an exception if the 
> topic creation failed, and does not account for the fact that the topic may 
> have been created between the time the WorkerSourceTask lists existing topics 
> and tries to create the topic.
>  
> See in particular: 
> [https://github.com/apache/kafka/blob/5c562efb2d76407011ea88c1ca1b2355079935bc/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java#L415-L423]
>  
> This is only an issue when using topic creation settings in the source 
> connector configuration, and when running multiple tasks that write to the 
> same topic.
> The workaround is to create the topics manually before starting the 
> connector, or to simply restart the failed tasks using the REST API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12270) Kafka Connect may fail a task when racing to create topic

2021-02-02 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-12270:
-

 Summary: Kafka Connect may fail a task when racing to create topic
 Key: KAFKA-12270
 URL: https://issues.apache.org/jira/browse/KAFKA-12270
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.7.0, 2.6.0, 2.8.0
Reporter: Randall Hauch
Assignee: Randall Hauch


When a source connector is configured with many tasks and uses the new topic 
creation feature, multiple tasks may attempt to write to the same topic, see 
that the topic does not exist, and then race to create it. The topic is only 
created once, but some tasks might fail with:
{code:java}
org.apache.kafka.connect.errors.ConnectException: Task failed to create new 
topic (name=TOPICX, numPartitions=8, replicationFactor=3, 
replicasAssignments=null, configs={cleanup.policy=delete}). Ensure that the 
task is authorized to create topics or that the topic exists and restart the 
task
  at 
org.apache.kafka.connect.runtime.WorkerSourceTask.maybeCreateTopic(WorkerSourceTask.java:436)
  at 
org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:364)
  at 
org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:264)
  at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
  at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)
... {code}
The reason appears to be that the WorkerSourceTask throws an exception if the 
topic creation failed, and does not account for the fact that the topic may 
have been created between the time the WorkerSourceTask lists existing topics 
and tries to create the topic.

 

See in particular: 
https://github.com/apache/kafka/blob/5c562efb2d76407011ea88c1ca1b2355079935bc/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java#L415-L423
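
For illustration, the kind of tolerant creation logic this implies (a sketch 
only, not the actual patch): treat a TopicExistsException from the create call 
as success, since it just means another task won the race.

{code:java}
import java.util.Collections;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;

public class RaceTolerantTopicCreation {

    // Create the topic, but tolerate another task having created it first.
    static void ensureTopicExists(Admin admin, NewTopic topic) throws Exception {
        try {
            admin.createTopics(Collections.singleton(topic)).all().get();
        } catch (ExecutionException e) {
            if (e.getCause() instanceof TopicExistsException) {
                return; // another task created the topic between our check and this call
            }
            throw e;
        }
    }
}
{code}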



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [Connect] Different validation requirements for connector creation and update

2021-01-29 Thread Randall Hauch
Thanks for raising this issue, Gunnar.

It is a shortcoming that Connect does not differentiate between starting
for the first time and restarting, nor between validating prior to
connector creation vs (re)validating a (potentially modified) connector
configuration while the connector is running. Proposing a KIP certainly
would be fine, though we do need to weigh this against increasing the
complexity of the APIs.

In the meantime, Chris did have some good suggestions for how a connector
might be able to deal with the current limitation. ATM I can't think of any
other obvious workarounds.

Best regards,

Randall


On Thu, Jan 21, 2021 at 9:52 AM Chris Egerton  wrote:

> Hi Gunnar,
>
> It's not possible to do this in a generalized fashion with the API provided
> by the framework today. Trying to hack your way around things by setting a
> flag or storing the connector name in some shared JVM state wouldn't work
> in a cluster with more than one worker since that state would obviously not
> be available across workers.
>
> With the specific case of the Debezium PostgreSQL connector, I'm wondering
> if you might be able to store the name of the connector in some external
> system (likely either the database itself or a Kafka topic, as I seem to
> recall that Debezium connectors create and consume from topics outside of
> the framework) after successfully claiming the replication slot. Then,
> during config validation, you could skip the replication slot validation if
> that stored name matched the name of the connector being validated. There
> are obviously some edge cases that'd need to be addressed such as sudden
> death of connectors after claiming the replication slot but before storing
> their name; just wanted to share the thought in case it leads somewhere
> useful.
>
> Either way, I think a small, simple KIP for this would be fine, as long as
> we could maintain backwards compatibility for existing connectors and allow
> connectors that use this new API to work on older versions of Connect that
> don't have support for it.
>
> Cheers,
>
> Chris
>
> On Thu, Jan 21, 2021 at 6:00 AM Gunnar Morling 
> wrote:
>
> > Hi,
> >
> > In the Debezium community, we ran into an interesting corner case of
> > connector config validation [1].
> >
> > The Debezium Postgres connector requires a database resource called a
> > "replication slot", which identifies this connector to the database and
> > tracks progress it has made reading the TX log. This replication slot
> must
> > not be shared between multiple clients (Debezium connectors, or others),
> so
> > we added a validation to make sure that the slot configured by the user
> > isn't active, i.e. no client is connected to it already. This works as
> > expected when setting up, or restarting a connector, but when trying to
> > update the connector configuration, the connector still is running when
> the
> > configuration is validated, so the slot is active and validation hence
> > fails.
> >
> > Is there a way we can distinguish during config validation whether the
> > connector is (re-)started or whether it's a validation upon
> > re-configuration (allowing us to skip this particular validation in the
> > re-configuration case)?
> >
> > If that's not the case, would there be interest for a KIP for adding such
> > capability to the Kafka Connect API?
> >
> > Thanks for any feedback,
> >
> > --Gunnar
> >
> > [1] https://issues.redhat.com/browse/DBZ-2952
> >
>


Re: [DISCUSS] KIP-676: Respect the logging hierarchy

2021-01-25 Thread Randall Hauch
Thanks for the quick response, Tom, and thanks again for tweaking the
wording on KIP-676. We can absolutely revisit this in the future if it
becomes more of an issue.

If anyone else disagrees, please say so.

Best regards,

Randall

On Mon, Jan 25, 2021 at 9:19 AM Tom Bentley  wrote:

> Hi Randall,
>
> I agree that Kafka Connect's API is more usable given that the user of it
> knows the semantics (and KIP-495 is clear on that point). So perhaps this
> inconsistency isn't enough of a problem that it's worth fixing, at least
> not at the moment.
>
> Kind regards,
>
> Tom
>
> On Fri, Jan 22, 2021 at 6:36 PM Randall Hauch  wrote:
>
> > Thanks for updating the wording in KIP-676.
> >
> > I guess the best path forward depends on what we think needs to change.
> If
> > we think KIP-676 and the changes already made in AK 2.8 are not quite
> > right, then maybe we should address this either by fixing the changes
> (and
> > maybe updating KIP-676 as needed) or reverting the changes if bigger
> > changes are necessary.
> >
> > OTOH, if we think KIP-676 and the changes already made in AK 2.8 are
> > correct but we just need to update Connect to have similar behavior,
> then I
> > don't see why we'd consider reverting the KIP-676 changes in AK 2.8. We
> > could pass another KIP that amends the Connect dynamic logging REST API
> > behavior and fix that in AK 3.0 (if there's not enough time for 2.8).
> >
> > However, it's not clear to me that the Connect dynamic logging REST API
> > behavior is incorrect. The API only allows setting one level at a time
> > (which is different than a Log4J configuration file), and so order
> matters.
> > Consider this case:
> > 1. PUT '{"level": "TRACE"}'
> > http://localhost:8083/admin/loggers/org.apache.kafka.connect
> > 2. PUT '{"level": "DEBUG"}'
> > http://localhost:8083/admin/loggers/org.apache.kafka
> >
> > Is the second call intended to take precedence and override the first, or
> > was the second not taking precedence over and instead augmenting the
> first?
> > KIP-495 really considers the latest call to take precedence over all
> prior
> > ones. This is simple, and ordering the updates can be used to get the
> > desired behavior. For example, swapping the order of these calls easily
> > gets the desired behavior of `org.apache.kafka.connect` (and descendants)
> > are at a TRACE level.
> >
> > IIUC, you seem to suggest that step 2 should not override the fact that
> > step 1 had already set the logger for `org.apache.kafka.connect`. In
> order
> > to do this, we'd have to track all of the dynamic settings made since the
> > VM started, support unsetting (deleting) previously-set levels, and take
> > all of them into account when any changes are made to potentially apply
> the
> > net effect of all dynamic settings across all logger contexts. Plus, we'd
> > need the ability to list the set contexts just to know what could or
> should
> > be deleted, and we'd have to remember the original state defined by the
> log
> > config so that when dynamic logging context levels are deleted we can
> > properly revert to the correct value (if not overridden by a higher-level
> > dynamic context).
> >
> > In short, this dramatically increases the complexity of both the
> > implementation and the UX behavior, and it's not clear whether all that
> > complexity really adds much value. WDYT?
> >
> > Best regards,
> >
> > Randall
> >
> > On Fri, Jan 22, 2021 at 7:58 AM Tom Bentley  wrote:
> >
> > > Hi Randall,
> > >
> > > Thanks for pointing this out. You're quite right about the behaviour of
> > the
> > > LoggingResource, and I've updated the KIP with your suggested wording.
> > >
> > > However, looking at it has made me realise that while KIP-676 means the
> > > logger levels are now hierarchical there's still an inconsistency
> between
> > > how levels are set in Kafka Connect and how it works in the broker.
> > >
> > > In log4j you can configure foo.baz=DEBUG and then foo=INFO and debug
> > > messages from foo.baz.Y will continue to be logged because setting the
> > > parent doesn't override all descendents (the level is inherited). As
> you
> > > know, in Kafka Connect, the way the log setting works is to find all
> the
> > > descendent loggers of foo and apply the given level to them, so setting
> > > foo.baz=DEBUG an

Re: [DISCUSS] KIP-676: Respect the logging hierarchy

2021-01-22 Thread Randall Hauch
Thanks for updating the wording in KIP-676.

I guess the best path forward depends on what we think needs to change. If
we think KIP-676 and the changes already made in AK 2.8 are not quite
right, then maybe we should address this either by fixing the changes (and
maybe updating KIP-676 as needed) or reverting the changes if bigger
changes are necessary.

OTOH, if we think KIP-676 and the changes already made in AK 2.8 are
correct but we just need to update Connect to have similar behavior, then I
don't see why we'd consider reverting the KIP-676 changes in AK 2.8. We
could pass another KIP that amends the Connect dynamic logging REST API
behavior and fix that in AK 3.0 (if there's not enough time for 2.8).

However, it's not clear to me that the Connect dynamic logging REST API
behavior is incorrect. The API only allows setting one level at a time
(which is different than a Log4J configuration file), and so order matters.
Consider this case:
1. PUT '{"level": "TRACE"}'
http://localhost:8083/admin/loggers/org.apache.kafka.connect
2. PUT '{"level": "DEBUG"}'
http://localhost:8083/admin/loggers/org.apache.kafka

Is the second call intended to take precedence and override the first, or
was the second not taking precedence over and instead augmenting the first?
KIP-495 really considers the latest call to take precedence over all prior
ones. This is simple, and ordering the updates can be used to get the
desired behavior. For example, swapping the order of these calls easily
gets the desired behavior of `org.apache.kafka.connect` (and descendants)
are at a TRACE level.
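
To make the contrast concrete, here is a minimal sketch (illustration only,
using the Log4j 1.x API that the runtime ships today) of how native log4j
treats an explicitly-set descendant level when an ancestor is changed
afterwards:

import org.apache.log4j.Level;
import org.apache.log4j.LogManager;
import org.apache.log4j.Logger;

public class HierarchyDemo {
    public static void main(String[] args) {
        Logger connect = LogManager.getLogger("org.apache.kafka.connect");
        Logger kafka = LogManager.getLogger("org.apache.kafka");

        connect.setLevel(Level.TRACE); // step 1: explicit level on the descendant
        kafka.setLevel(Level.DEBUG);   // step 2: explicit level on the ancestor

        // Native log4j keeps the descendant's explicitly-set level; only loggers
        // without an explicit level inherit DEBUG from the ancestor.
        System.out.println(connect.getEffectiveLevel()); // TRACE
        System.out.println(kafka.getEffectiveLevel());   // DEBUG
    }
}

Under Connect's /admin/loggers semantics, issuing the two PUTs above in that
same order instead leaves `org.apache.kafka.connect` at DEBUG, because the
second call is applied to every descendant of `org.apache.kafka`.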

IIUC, you seem to suggest that step 2 should not override the fact that
step 1 had already set the logger for `org.apache.kafka.connect`. In order
to do this, we'd have to track all of the dynamic settings made since the
VM started, support unsetting (deleting) previously-set levels, and take
all of them into account when any changes are made to potentially apply the
net effect of all dynamic settings across all logger contexts. Plus, we'd
need the ability to list the set contexts just to know what could or should
be deleted, and we'd have to remember the original state defined by the log
config so that when dynamic logging context levels are deleted we can
properly revert to the correct value (if not overridden by a higher-level
dynamic context).

In short, this dramatically increases the complexity of both the
implementation and the UX behavior, and it's not clear whether all that
complexity really adds much value. WDYT?

Best regards,

Randall

On Fri, Jan 22, 2021 at 7:58 AM Tom Bentley  wrote:

> Hi Randall,
>
> Thanks for pointing this out. You're quite right about the behaviour of the
> LoggingResource, and I've updated the KIP with your suggested wording.
>
> However, looking at it has made me realise that while KIP-676 means the
> logger levels are now hierarchical there's still an inconsistency between
> how levels are set in Kafka Connect and how it works in the broker.
>
> In log4j you can configure foo.baz=DEBUG and then foo=INFO and debug
> messages from foo.baz.Y will continue to be logged because setting the
> parent doesn't override all descendents (the level is inherited). As you
> know, in Kafka Connect, the way the log setting works is to find all the
> descendent loggers of foo and apply the given level to them, so setting
> foo.baz=DEBUG and then foo=INFO means foo.baz.Y debug messages will not
> appear.
>
> Obviously that behavior for Connect is explicitly stated in KIP-495, but I
> can't help but feel that the KIP-676 changes not addressing this is a lost
> opportunity.
>
> It's also worth bearing in mind that KIP-653[1] is (hopefully) going to
> happen for Kafka 3.0.
>
> So I wonder if perhaps the pragmatic thing to do would be to:
>
> 1. Revert the changes for KIP-676 for Kafka 2.8
> 2. Pass another KIP, to be implemented for Kafka 3.0, which makes all the
> Kafka APIs consistent in both respecting the hierarchy and also in what
> updating a logger level means.
>
> I don't have a particularly strong preference either way, but it seems
> better, from a users PoV, if all these logging changes happened in a major
> release and achieved consistency across components going forward.
>
> Thoughts?
>
> Kind regards,
>
> Tom
>
> [1]:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-653%3A+Upgrade+log4j+to+log4j2
>
>
>
> On Thu, Jan 21, 2021 at 9:17 PM Randall Hauch  wrote:
>
> > Tom, et al.,
> >
> > I'm really late to the party and mistakenly thought the scope of this KIP
> > included only the broker. But I now see in the KIP-676 [1] text the
> > following claim:
> >
> > Kafka exposes

Re: [DISCUSS] KIP-676: Respect the logging hierarchy

2021-01-21 Thread Randall Hauch
Tom, et al.,

I'm really late to the party and mistakenly thought the scope of this KIP
included only the broker. But I now see in the KIP-676 [1] text the
following claim:

Kafka exposes a number of APIs for describing and changing logger levels:
>
> * The Kafka broker exposes the DescribeConfigs RPC with the BROKER_LOGGER
> config resource.
> * Broker logger levels can also be configured using the
> Log4jControllerMBean MBean, exposed through JMX as
> kafka:type=kafka.Log4jController.
> * Kafka Connect exposes the /admin/loggers REST API endpoint for
> describing and changing logger levels.
>
> When accessing a logger's level these APIs do not respect the logger
> hierarchy.
> Instead, if the logger's level is not explicitly set the level of the root
> logger is used, even when an intermediate logger is configured with a
> different level.
>

Regarding Connect, the third bullet is accurate: Kafka
Connect's `/admin/loggers/` REST API introduced via KIP-495 [2] does
describe and change logger levels.

But the first sentence after the bullets is inaccurate, because per KIP-495
the Kafka Connect `/admin/loggers/` REST API does respect the hierarchy. In
fact, this case is explicitly called out in KIP-495:

Setting the log level of an ancestor (for example,
> `org.apache.kafka.connect` as opposed to a classname) will update the
> levels of all child classes.
>

and there are even unit tests in Connect for that case [3].
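
As a concrete illustration (a sketch only; the worker URL and logger names are
examples), setting an ancestor level via the REST API and reading a descendant
back could look like this with the JDK 11 HTTP client:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AdminLoggersDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String base = "http://localhost:8083/admin/loggers/";

        // Set the level on an ancestor; per KIP-495 the level is applied to
        // all of its descendant loggers as well.
        HttpRequest put = HttpRequest.newBuilder()
                .uri(URI.create(base + "org.apache.kafka.connect"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"level\": \"TRACE\"}"))
                .build();
        client.send(put, HttpResponse.BodyHandlers.ofString());

        // Reading back a descendant shows the level that was just applied.
        HttpRequest get = HttpRequest.newBuilder()
                .uri(URI.create(base + "org.apache.kafka.connect.runtime.WorkerSourceTask"))
                .GET()
                .build();
        System.out.println(client.send(get, HttpResponse.BodyHandlers.ofString()).body());
    }
}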

Can we modify this sentence in KIP-676 to reflect this? One way to minimize
the changes to the already-approved KIP is to change:

> When accessing a logger's level these APIs do not respect the logger
> hierarchy.

to

> When accessing a logger's level the first two of these APIs do not respect
> the logger hierarchy.


Thoughts?

Randall

[1]
https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy
[2]
https://cwiki.apache.org/confluence/display/KAFKA/KIP-495%3A+Dynamically+Adjust+Log+Levels+in+Connect
[3]
https://github.com/apache/kafka/blob/trunk/connect/runtime/src/test/java/org/apache/kafka/connect/runtime/rest/resources/LoggingResourceTest.java#L118-L124

On Thu, Oct 8, 2020 at 9:56 AM Dongjin Lee  wrote:

> Hi Tom,
>
> I also agree that the current behavior is clearly wrong and I think it was
> mistakenly omitted in the KIP-412 discussion process. The current
> implementation does not reflect log4j's logger hierarchy.
>
> Regards,
> Dongjin
>
> On Thu, Oct 8, 2020 at 1:27 AM John Roesler  wrote:
>
> > Ah, thanks Tom,
> >
> > My only concern was that we might silently start logging a
> > lot more or less after the upgrade, but if the logging
> > behavior won't change at all, then the concern is moot.
> >
> > Since the KIP is only to make the APIs return an accurrate
> > representation of the actual log level, I have no concerns
> > at all.
> >
> > Thanks,
> > -John
> >
> > On Wed, 2020-10-07 at 17:00 +0100, Tom Bentley wrote:
> > > Hi John,
> > >
> > > You're right, but note that this affects the level the broker/connect
> > > worker was _reporting_ for that logger, not the level at which the
> logger
> > > was actually logging, which would be TRACE both before and after
> > upgrading.
> > >
> > > I've added more of an explanation to the KIP, since it wasn't very
> clear.
> > >
> > > Thanks for taking a look.
> > >
> > > Tom
> > >
> > > On Wed, Oct 7, 2020 at 4:29 PM John Roesler 
> wrote:
> > >
> > > > Thanks for this KIP Tom,
> > > >
> > > > Just to clarify the impact: In your KIP you described a
> > > > situation in which the root logger is configured at INFO, an
> > > > "kafka.foo" is configured at TRACE, and then "kafka.foo.bar"
> > > > is resolved to INFO.
> > > >
> > > > Assuming this goes into 3.0, would it be the case that if I
> > > > had the above configuration, after upgrade, "kafka.foo.bar"
> > > > would just switch from INFO to TRACE on its own?
> > > >
> > > > It seems like it must, since it's not configured explicitly,
> > > > and we are changing the inheritance rule from "inherit
> > > > directly from root" to "inherit from the closest configured
> > > > ancestor in the hierarchy".
> > > >
> > > > Am I thinking about this right?
> > > >
> > > > Thanks,
> > > > -John
> > > >
> > > > On Wed, 2020-10-07 at 15:42 +0100, Tom Bentley wrote:
> > > > > Hi all,
> > > > >
> > > > > I would like to start discussion on a small KIP which seeks to
> > rectify an
> > > > > inconsistency between how Kafka reports logger levels and how
> logger
> > > > > configuration is inherited hierarchically in log4j.
> > > > >
> > > > >
> > > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy
> > > > > <
> > > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy?moved=true
> > > > >
> > > > > If you have a few minutes to have a look I'd be grateful for any
> > > > feedback.
> > > > > Many thanks,
> > > > >
> > > > > Tom
> >
> >
>
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathemati

Re: [ANNOUNCE] Apache Kafka 2.7.0

2020-12-21 Thread Randall Hauch
Fantastic! Thanks for driving the release, Bill.

Congratulations to the whole Kafka community.

On Mon, Dec 21, 2020 at 5:55 PM Gwen Shapira  wrote:

> woooh!!!
>
> Great job on the release Bill and everyone!
>
> On Mon, Dec 21, 2020 at 8:01 AM Bill Bejeck  wrote:
> >
> > The Apache Kafka community is pleased to announce the release for Apache
> > Kafka 2.7.0
> >
> > * Configurable TCP connection timeout and improve the initial metadata
> fetch
> > * Enforce broker-wide and per-listener connection creation rate (KIP-612,
> > part 1)
> > * Throttle Create Topic, Create Partition and Delete Topic Operations
> > * Add TRACE-level end-to-end latency metrics to Streams
> > * Add Broker-side SCRAM Config API
> > * Support PEM format for SSL certificates and private key
> > * Add RocksDB Memory Consumption to RocksDB Metrics
> > * Add Sliding-Window support for Aggregations
> >
> > This release also includes a few other features, 53 improvements, and 91
> > bug fixes.
> >
> > All of the changes in this release can be found in the release notes:
> > https://www.apache.org/dist/kafka/2.7.0/RELEASE_NOTES.html
> >
> > You can read about some of the more prominent changes in the Apache Kafka
> > blog:
> > https://blogs.apache.org/kafka/entry/what-s-new-in-apache4
> >
> > You can download the source and binary release (Scala 2.12, 2.13) from:
> > https://kafka.apache.org/downloads#2.7.0
> >
> >
> ---
> >
> >
> > Apache Kafka is a distributed streaming platform with four core APIs:
> >
> >
> > ** The Producer API allows an application to publish a stream of records to
> > one or more Kafka topics.
> >
> > ** The Consumer API allows an application to subscribe to one or more
> > topics and process the stream of records produced to them.
> >
> > ** The Streams API allows an application to act as a stream processor,
> > consuming an input stream from one or more topics and producing an
> > output stream to one or more output topics, effectively transforming the
> > input streams to output streams.
> >
> > ** The Connector API allows building and running reusable producers or
> > consumers that connect Kafka topics to existing applications or data
> > systems. For example, a connector to a relational database might
> > capture every change to a table.
> >
> >
> > With these APIs, Kafka can be used for two broad classes of application:
> >
> > ** Building real-time streaming data pipelines that reliably get data
> > between systems or applications.
> >
> > ** Building real-time streaming applications that transform or react
> > to the streams of data.
> >
> >
> > Apache Kafka is in use at large and small companies worldwide, including
> > Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
> > Target, The New York Times, Uber, Yelp, and Zalando, among others.
> >
> > A big thank you for the following 117 contributors to this release!
> >
> > A. Sophie Blee-Goldman, Aakash Shah, Adam Bellemare, Adem Efe Gencer,
> > albert02lowis, Alex Diachenko, Andras Katona, Andre Araujo, Andrew Choi,
> > Andrew Egelhofer, Andy Coates, Ankit Kumar, Anna Povzner, Antony Stubbs,
> > Arjun Satish, Ashish Roy, Auston, Badai Aqrandista, Benoit Maggi, bill,
> > Bill Bejeck, Bob Barrett, Boyang Chen, Brian Byrne, Bruno Cadonna, Can
> > Cecen, Cheng Tan, Chia-Ping Tsai, Chris Egerton, Colin Patrick McCabe,
> > David Arthur, David Jacot, David Mao, Dhruvil Shah, Dima Reznik, Edoardo
> > Comar, Ego, Evelyn Bayes, feyman2016, Gal Margalit, gnkoshelev, Gokul
> > Srinivas, Gonzalo Muñoz, Greg Harris, Guozhang Wang, high.lee,
> huangyiming,
> > huxi, Igor Soarez, Ismael Juma, Ivan Yurchenko, Jason Gustafson, Jeff
> Kim,
> > jeff kim, Jesse Gorzinski, jiameixie, Jim Galasyn, JoelWee, John Roesler,
> > John Thomas, Jorge Esteban Quilcate Otoya, Julien Jean Paul Sirocchi,
> > Justine Olshan, khairy, Konstantine Karantasis, Kowshik Prakasam, leah,
> Lee
> > Dongjin, Leonard Ge, Levani Kokhreidze, Lucas Bradstreet, Lucent-Wong,
> Luke
> > Chen, Mandar Tillu, manijndl7, Manikumar Reddy, Mario Molina, Matthias J.
> > Sax, Micah Paul Ramos, Michael Bingham, Mickael Maison, Navina Ramesh,
> > Nikhil Bhatia, Nikolay, Nikolay Izhikov, Ning Zhang, Nitesh Mor, Noa
> > Resare, Rajini Sivaram, Raman Verma, Randall Hauch, Rens Groothuijsen,
> > Richard Fussenegger, Rob Meng, Rohan, Ron Dagostino, Sanjana Kaun

[jira] [Created] (KAFKA-10816) Connect REST API should have a resource that can be used as a readiness probe

2020-12-07 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10816:
-

 Summary: Connect REST API should have a resource that can be used 
as a readiness probe
 Key: KAFKA-10816
 URL: https://issues.apache.org/jira/browse/KAFKA-10816
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Randall Hauch


There are a few ways to accurately detect whether a Connect worker is 
*completely* ready to process all REST requests:

# Wait for `Herder started` in the Connect worker logs
# Use the REST API to issue a request that will be completed only after the 
herder has started, such as `GET /connectors/{name}/` or `GET 
/connectors/{name}/status`.

Other techniques can be used to detect other startup states, though none of 
these will guarantee that the worker has indeed completely started up and can 
process all REST requests:

* `GET /` can be used to know when the REST server has started, but this may be 
before the worker has started completely and successfully.
* `GET /connectors` can be used to know when the REST server has started, but 
this may be before the worker has started completely and successfully. And, for 
the distributed Connect worker, this may actually return an older list of 
connectors if the worker hasn't yet completely read through the internal config 
topic. It's also possible that this request returns even if the worker is 
having trouble reading from the internal config topic.
* `GET /connector-plugins` can be used to know when the REST server has 
started, but this may be before the worker has started completely and 
successfully.

The Connect REST API should have an endpoint that more obviously and more 
simply can be used as a readiness probe. This could be a new resource (e.g., 
`GET /status`), though this would only work on newer Connect runtimes, and 
existing tooling, installations, and examples would have to be modified to take 
advantage of this feature (if it exists). 

Alternatively, we could make sure that the existing resources (e.g., `GET /` or 
`GET /connectors`) wait for the herder to start completely; this wouldn't 
require a KIP and it would not require clients use different technique for 
newer and older Connect runtimes. (Whether or not we back port this is another 
question altogether, since it's debatable whether the behavior of the existing 
REST resources is truly a bug.)
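
For illustration, a readiness probe built on the existing API (technique #2 above) might 
look roughly like the following; the worker URL and connector name are placeholders, and 
this is only a sketch rather than a proposed API:

{code}
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConnectReadinessCheck {
    public static void main(String[] args) {
        String workerUrl = "http://localhost:8083";   // placeholder
        String connector = "my-connector";            // placeholder

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(workerUrl + "/connectors/" + connector + "/status"))
                .GET()
                .build();
        try {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            // Per the description above, this resource is only answered once the
            // herder has started, so any HTTP response implies the worker is up.
            System.out.println("worker ready (HTTP " + response.statusCode() + ")");
        } catch (IOException | InterruptedException e) {
            System.out.println("worker not ready: " + e);
        }
    }
}
{code}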



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10811) System exit from MirrorConnectorsIntegrationTest#testReplication

2020-12-04 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10811:
-

 Summary: System exit from 
MirrorConnectorsIntegrationTest#testReplication
 Key: KAFKA-10811
 URL: https://issues.apache.org/jira/browse/KAFKA-10811
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect, mirrormaker
Affects Versions: 2.5.1, 2.6.0, 2.7.0, 2.8.0
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 2.7.0, 2.5.2, 2.6.1, 2.8.0


The MirrorConnectorsIntegrationTest::testReplication has been very frequently 
causing the build to fail with:

{noformat}
FAILURE: Build failed with an exception.
13:50:17  
13:50:17  * What went wrong:
13:50:17  Execution failed for task ':connect:mirror:integrationTest'.
13:50:17  > Process 'Gradle Test Executor 52' finished with non-zero exit value 
1
13:50:17This problem might be caused by incorrect test process 
configuration.
13:50:17Please refer to the test execution section in the User Manual at 
https://docs.gradle.org/6.7.1/userguide/java_testing.html#sec:test_execution
{noformat}

Even running this locally resulted in mostly failures; specifically, the 
`MirrorConnectorsIntegrationTest::testReplication` test method reliably failed 
because the JVM was exited mid-test.

[~ChrisEgerton] traced this to the fact that these integration tests are 
creating multiple EmbeddedConnectCluster instances, each of which by default:
* mask the Exit procedures upon startup
* reset the Exit procedures upon stop

But since *each* cluster does this, {{Exit.resetExitProcedure()}} is already 
called when the first Connect cluster is stopped. If any problems occur while 
the second Connect cluster is being stopped (e.g., the KafkaBasedLog produce 
thread is interrupted), the Exit called by the Connect worker then results in 
the termination of the JVM.

The solution is to change the MirrorConnectorsIntegrationTest to own the 
overriding of the exit procedures, and to tell the EmbeddedConnectCluster 
instances to not mask the exit procedures.

With these changes, the tests always passed locally for me.
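
The shape of the change is roughly the following (a sketch only; the builder 
option name is approximate):

{code}
import org.apache.kafka.common.utils.Exit;

public class MirrorConnectorsIntegrationTestSketch {

    // Installed once for the whole test class rather than per embedded cluster.
    static void installExitProcedures() {
        Exit.setExitProcedure((code, message) -> {
            throw new UnsupportedOperationException("exit(" + code + ") called from test");
        });
        Exit.setHaltProcedure((code, message) -> {
            throw new UnsupportedOperationException("halt(" + code + ") called from test");
        });
    }

    static void restoreExitProcedures() {
        Exit.resetExitProcedure();
        Exit.resetHaltProcedure();
    }

    // Each EmbeddedConnectCluster is then built so that it neither masks nor
    // resets the exit procedures itself, e.g. (option name approximate):
    //   new EmbeddedConnectCluster.Builder()
    //       .name("mm2-connect-cluster")
    //       .maskExitProcedures(false)
    //       .build();
}
{code}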



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10572) Rename MirrorMaker 2 blacklist configs for KIP-629

2020-10-20 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10572.
---
Fix Version/s: 2.7.0
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to the `trunk` branch and cherry-picked to the `2.7` branch for 
inclusion in 2.7.0.

> Rename MirrorMaker 2 blacklist configs for KIP-629
> --
>
> Key: KAFKA-10572
> URL: https://issues.apache.org/jira/browse/KAFKA-10572
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
>Priority: Major
> Fix For: 2.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10332) MirrorMaker2 fails to detect topic if remote topic is created first

2020-10-19 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10332.
---
Fix Version/s: 2.6.1
   2.5.2
   2.7.0
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to the `trunk` branch (for future 2.8 release), and cherry-picked to the 
`2.7` for inclusion in the upcoming 2.7.0, the `2.6` branch for inclusion in 
the next 2.6.1 if/when it's released, and the `2.5` branch for the next 2.5.2 
if/when it's released.

> MirrorMaker2 fails to detect topic if remote topic is created first
> ---
>
> Key: KAFKA-10332
> URL: https://issues.apache.org/jira/browse/KAFKA-10332
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.6.0
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 2.7.0, 2.5.2, 2.6.1
>
>
> Setup:
> - 2 clusters: source and target
> - Mirroring data from source to target
> - create a topic called source.mytopic on the target cluster
> - create a topic called mytopic on the source cluster
> At this point, MM2 does not start mirroring the topic.
> This also happens if you delete and recreate a topic that is being mirrored.
> The issue is in 
> [refreshTopicPartitions()|https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorSourceConnector.java#L211-L232]
>  which basically does a diff between the 2 clusters.
> When creating the topic on the source cluster last, it makes the partition 
> list of both clusters match, hence not triggering a reconfiguration



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8370) Kafka Connect should check for existence of internal topics before attempting to create them

2020-10-16 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-8370.
--
Resolution: Won't Fix

As mentioned above, we can never avoid the race condition of two Connect 
workers trying to create the same topic, so it's imperative that the 
create-topic request is handled atomically and throws TopicExistsException if 
the request fails because the topic already exists. KAFKA-8875 now ensures 
that happens, and Connect already properly handles the case when a 
create-topic request fails with TopicExistsException.

The conclusion: there is no need for the check before creating the topic, 
because that is not guaranteed to be sufficient anyway.
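
For reference, the pattern that makes the pre-check unnecessary is roughly the 
following (a simplified sketch of the admin-client handling, not the actual 
TopicAdmin code):

{code}
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;

public class CreateTopicSketch {
    public static void createIfMissing(Map<String, Object> adminConfig, NewTopic topic)
            throws ExecutionException, InterruptedException {
        try (Admin admin = Admin.create(adminConfig)) {
            try {
                admin.createTopics(Collections.singleton(topic)).all().get();
            } catch (ExecutionException e) {
                if (e.getCause() instanceof TopicExistsException) {
                    return; // another worker created the topic first; that's fine
                }
                throw e;
            }
        }
    }
}
{code}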

> Kafka Connect should check for existence of internal topics before attempting 
> to create them
> 
>
> Key: KAFKA-8370
> URL: https://issues.apache.org/jira/browse/KAFKA-8370
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Major
>
> The Connect worker doesn't current check for the existence of the internal 
> topics, and instead is issuing a CreateTopic request and handling a 
> TopicExistsException. However, this can cause problems when the number of 
> brokers is fewer than the replication factor, *even if the topic already 
> exists* and the partitions of those topics all remain available on the 
> remaining brokers.
> One problem of the current approach is that the broker checks the requested 
> replication factor before checking for the existence of the topic, resulting 
> in unexpected exceptions when the topic does exist:
> {noformat}
> connect  | [2019-05-14 19:24:25,166] ERROR Uncaught exception in herder 
> work thread, exiting:  
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> connect  | org.apache.kafka.connect.errors.ConnectException: Error while 
> attempting to create/find topic(s) 'connect-offsets'
> connect  |at 
> org.apache.kafka.connect.util.TopicAdmin.createTopics(TopicAdmin.java:255)
> connect  |at 
> org.apache.kafka.connect.storage.KafkaOffsetBackingStore$1.run(KafkaOffsetBackingStore.java:99)
> connect  |at 
> org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:127)
> connect  |at 
> org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:109)
> connect  |at 
> org.apache.kafka.connect.runtime.Worker.start(Worker.java:164)
> connect  |at 
> org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:114)
> connect  |at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:214)
> connect  |at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> connect  |at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> connect  |at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> connect  |at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> connect  |at java.lang.Thread.run(Thread.java:748)
> connect  | Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication 
> factor: 3 larger than available brokers: 2.
> connect  |at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
> connect  |at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
> connect  |at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
> connect  |at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
> connect  |at 
> org.apache.kafka.connect.util.TopicAdmin.createTopics(TopicAdmin.java:228)
> connect  |... 11 more
> connect  | Caused by: 
> org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication 
> factor: 3 larger than available brokers: 2.
> connect  | [2019-05-14 19:24:25,168] INFO Kafka Connect stopping 
> (org.apache.kafka.connect.runtime.Connect)
> {noformat}
> Instead of always issuing a CreateTopic request, the worker's admin client 
> should first check whether the topic exists, and if not *then* attempt to 
> create the topic.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10573) Rename connect transform configs for KIP-629

2020-10-13 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10573.
---
Fix Version/s: (was: 2.8.0)
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to `trunk` for inclusion in 2.8.0 (or whatever major/minor release 
follows 2.7.0), and to the `2.7` branch for inclusion in 2.7.0.

> Rename connect transform configs for KIP-629
> 
>
> Key: KAFKA-10573
> URL: https://issues.apache.org/jira/browse/KAFKA-10573
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.7.0
>
>
> Part of the implementation for 
> [KIP-629|https://cwiki.apache.org/confluence/display/KAFKA/KIP-629%3A+Use+racially+neutral+terms+in+our+codebase]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10600) Connect adds error to property in validation result if connector does not define the property

2020-10-12 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10600:
-

 Summary: Connect adds error to property in validation result if 
connector does not define the property
 Key: KAFKA-10600
 URL: https://issues.apache.org/jira/browse/KAFKA-10600
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.10.0.0
Reporter: Randall Hauch


Kafka Connect's {{AbstractHerder.generateResult(...)}} method is responsible 
for taking the result of a {{Connector.validate(...)}} call and constructing 
the {{ConfigInfos}} object that is then mapped to the JSON representation.

As this method (see 
[code|https://github.com/apache/kafka/blob/1f8ac6e6fee3aa404fc1a4c01ac2e0c48429a306/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java#L504-L507])
 iterates over the {{ConfigKey}} objects in the connector's {{ConfigDef}} and 
the {{ConfigValue}} objects returned by the {{Connector.validate(...)}} method, 
this method adds an error message to any {{ConfigValue}} whose 
{{configValue.name()}} does not correspond to a {{ConfigKey}} in the 
connector's {{ConfigDef}}. 

{code}
if (!configKeys.containsKey(configName)) {
configValue.addErrorMessage("Configuration is not defined: " + 
configName);
configInfoList.add(new ConfigInfo(null, 
convertConfigValue(configValue, null)));
}
{code}

Interestingly, these errors are not included in the total error count of the 
response. Is that intentional??

This behavior does not allow connectors to report validation errors against 
extra properties not defined in the connector's {{ConfigDef}}. 

Consider a connector that allows arbitrary properties with some prefix (e.g., 
{{connection.*}}) to be included and used in the connector properties. One 
example is to supply additional properties to a JDBC connection, where the 
connector may not be able to know these "additional properties" in advance 
because the connector either works with multiple JDBC drivers or the connection 
properties allowed by a JDBC driver are many and/or vary over different JDBC 
driver versions or server versions.
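
To make the use case concrete, such a connector might validate its pass-through 
properties roughly like this (a sketch only; the {{connection.}} prefix is just 
the example from above):

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.kafka.common.config.Config;
import org.apache.kafka.common.config.ConfigValue;
import org.apache.kafka.connect.connector.Connector;

public abstract class PassThroughValidatingConnector extends Connector {

    @Override
    public Config validate(Map<String, String> connectorConfigs) {
        // Validate the properties declared in the ConfigDef as usual.
        Config result = super.validate(connectorConfigs);
        List<ConfigValue> values = new ArrayList<>(result.configValues());

        // Also report on the pass-through "connection.*" properties, which are
        // intentionally not declared in the ConfigDef.
        for (Map.Entry<String, String> entry : connectorConfigs.entrySet()) {
            String name = entry.getKey();
            if (name.startsWith("connection.") && !config().names().contains(name)) {
                ConfigValue value = new ConfigValue(name);
                value.value(entry.getValue());
                // value.addErrorMessage(...) if the property is known to be invalid
                values.add(value);
            }
        }
        return new Config(values);
    }
}
{code}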

Such "additional properties" are not prohibited by Connect API, yet if a 
connector implementation chooses to include any such additional properties in 
the {{Connector.validate(...)}} result (whether or not the corresponding 
{{ConfigValue}} has an error) then Connect will always add the following error 
to that property. 

{quote}
Configuration is not defined: 
{quote}

This code was in the 0.10.0.0 release of Kafka via the 
[PR|https://github.com/apache/kafka/pull/964] for KAFKA-3315, which is one of 
the tasks that implemented 
[KIP-26|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=58851767]
 for Kafka Connect (approved and partially added in 0.9.0.0). There is no 
mention of "validation" in KIP-26 nor any followup KIP (that I can find).

I can kind of imagine the original thought process: any user-supplied property 
that is not defined by a {{ConfigDef}} is inherently an error. However, this 
assumption is not matched by any mention in the Connect API, documentation, or 
any of Connect's KIPs.
IMO, this is a bug in the {{AbstractHerder}} that over-constrains the connector 
properties to be only those defined in the connector's {{ConfigDef}}.

Quite a few connectors already support additional properties, and it's perhaps 
only by chance that this happens to work: 
* If a connector does not override {{Connector.validate(...)}}, extra 
properties are not validated and therefore are not included in the resulting 
{{Config}} response with one {{ConfigValue}} per property defined in the 
connector's {{ConfigDef}}.
* If a connector does override {{Connector.validate(...)}} and includes in the 
{{Config}} response a {{ConfigValue}} for any additional properties, the 
{{AbstractHerder.generateResult(...)}} method does add the error but does not 
include this error in the error count, which is actually used to determine if 
there are any validation problems before starting/updating the connector.

I propose that the {{AbstractHerder.generateResult(...)}} method be changed to 
not add its error message to the validation result, and to properly handle all 
{{ConfigValue}} objects regardless of whether there is a corresponding 
{{ConfigKey}} in the connector's {{ConfigDef}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9546) Make FileStreamSourceTask extendable with generic streams

2020-09-29 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9546.
--
Resolution: Won't Fix

I'm going to close this as WONTFIX, per my previous comment.

> Make FileStreamSourceTask extendable with generic streams
> -
>
> Key: KAFKA-9546
> URL: https://issues.apache.org/jira/browse/KAFKA-9546
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Csaba Galyo
>Assignee: Csaba Galyo
>Priority: Major
>  Labels: connect-api, needs-kip
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Use case: I want to read a ZIP compressed text file with a file connector and 
> send it to Kafka.
> Currently, we have FileStreamSourceConnector which reads a \n delimited text 
> file. This connector always returns a task of type FileStreamSourceTask.
> The FileStreamSourceTask reads from stdin or opens a file InputStream. The 
> issue with this approach is that the input needs to be a text file, otherwise 
> it won't work. 
> The code should be modified so that users could change the default 
> InputStream to e.g. ZipInputStream, or any other format. The code is currently 
> written in such a way that it's not possible to extend it, we cannot use a 
> different input stream. 
> See example here where the code got copy-pasted just so it could read from a 
> ZstdInputStream (which reads ZSTD compressed files): 
> [https://github.com/gcsaba2/kafka-zstd/tree/master/src/main/java/org/apache/kafka/connect/file]
>  
> I suggest 2 changes:
>  # FileStreamSourceConnector should be extendable to return tasks of 
> different types. These types would be input by the user through the 
> configuration map
>  # FileStreamSourceTask should be modified so it could be extended and child 
> classes could define different input streams.
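
For anyone with the same use case, the stream wrapping itself is straightforward; 
below is a standalone sketch of the technique that the linked kafka-zstd example 
bakes into its copied task, here using java.util.zip and an illustrative file name:

{code}
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipInputStream;

public class ZipLineReader {
    public static void main(String[] args) throws IOException {
        // "input.zip" is illustrative; an extended or copied task would open the
        // wrapped stream in place of the plain FileInputStream it uses today.
        try (ZipInputStream zip = new ZipInputStream(new FileInputStream("input.zip"))) {
            zip.getNextEntry(); // position at the first entry in the archive
            BufferedReader reader =
                    new BufferedReader(new InputStreamReader(zip, StandardCharsets.UTF_8));
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // each line would become a SourceRecord
            }
        }
    }
}
{code}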



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 2.6.0 release

2020-08-06 Thread Randall Hauch
Just to follow up, we've released AK 2.6.0 today. See the announcement
thread.

On Tue, Jul 14, 2020 at 4:01 PM Randall Hauch  wrote:

> I've just announced 2.6.0 RC0 in a vote thread to this list. If you find
> any issues, please reply to that "[VOTE] 2.6.0 RC0" thread.
>
> Thanks, and best regards!
>
> Randall
>
> On Fri, Jul 10, 2020 at 7:44 PM Matthias J. Sax  wrote:
>
>> Randall,
>>
>> we found another blocker:
>> https://issues.apache.org/jira/browse/KAFKA-10262
>>
>> Luckily, we have already a PR for it.
>>
>>
>> -Matthias
>>
>>
>> On 7/8/20 3:05 PM, Sophie Blee-Goldman wrote:
>> > Hey Randall,
>> >
>> > We just discovered another regression in 2.6:
>> > https://issues.apache.org/jira/browse/KAFKA-10249
>> >
>> > The fix is extremely straightforward -- only about two lines of actual
>> > code -- and low risk. It is a new regression introduced in 2.6 and
>> affects
>> > all Streams apps with any suppression or other in-memory state.
>> >
>> > The PR is already ready here: https://github.com/apache/kafka/pull/8996
>> >
>> > Best,
>> > Sophie
>> >
>> > On Wed, Jul 8, 2020 at 10:59 AM John Roesler 
>> wrote:
>> >
>> >> Hi Randall,
>> >>
>> >> While developing system tests, I've just unearthed a new 2.6
>> regression:
>> >> https://issues.apache.org/jira/browse/KAFKA-10247
>> >>
>> >> I've got a PR in progress. Hoping to finish it up today:
>> >> https://github.com/apache/kafka/pull/8994
>> >>
>> >> Sorry for the trouble,
>> >> -John
>> >>
>> >> On Mon, Jun 29, 2020, at 09:29, Randall Hauch wrote:
>> >>> Thanks for raising this, David. I agree it makes sense to include this
>> >> fix
>> >>> in 2.6, so I've adjusted the "Fix Version(s)" field to include
>> '2.6.0'.
>> >>>
>> >>> Best regards,
>> >>>
>> >>> Randall
>> >>>
>> >>> On Mon, Jun 29, 2020 at 8:25 AM David Jacot 
>> wrote:
>> >>>
>> >>>> Hi Randall,
>> >>>>
>> >>>> We have discovered an annoying issue that we introduced in 2.5:
>> >>>>
>> >>>> Describing topics with the command line tool fails if the user does
>> not
>> >>>> have the
>> >>>> privileges to access the ListPartitionReassignments API. I believe
>> that
>> >>>> this is the
>> >>>> case for most non-admin users.
>> >>>>
>> >>>> I propose to include the fix in 2.6. The fix is trivial so low risk.
>> >> What
>> >>>> do you think?
>> >>>>
>> >>>> JIRA: https://issues.apache.org/jira/browse/KAFKA-10212
>> >>>> PR: https://github.com/apache/kafka/pull/8947
>> >>>>
>> >>>> Best,
>> >>>> David
>> >>>>
>> >>>> On Sat, Jun 27, 2020 at 4:39 AM John Roesler 
>> >> wrote:
>> >>>>
>> >>>>> Hi Randall,
>> >>>>>
>> >>>>> I neglected to notify this thread when I merged the fix for
>> >>>>> https://issues.apache.org/jira/browse/KAFKA-10185
>> >>>>> on June 19th. I'm sorry about that oversight. It is marked with
>> >>>>> a fix version of 2.6.0.
>> >>>>>
>> >>>>> On a side node, I have a fix for KAFKA-10173, which I'm merging
>> >>>>> and backporting right now.
>> >>>>>
>> >>>>> Thanks for managing the release,
>> >>>>> -John
>> >>>>>
>> >>>>> On Thu, Jun 25, 2020, at 10:23, Randall Hauch wrote:
>> >>>>>> Thanks for the update, folks!
>> >>>>>>
>> >>>>>> Based upon Jira [1], we currently have 4 issues that are considered
>> >>>>>> blockers for the 2.6.0 release and production of RCs:
>> >>>>>>
>> >>>>>>- https://issues.apache.org/jira/browse/KAFKA-10134 - High CPU
>> >>>> issue
>> >>>>>>during rebalance in Kafka consumer after upgrading to 2.5
>> >>>> (unassigned)
> >

[ANNOUNCE] Apache Kafka 2.6.0

2020-08-06 Thread Randall Hauch
The Apache Kafka community is pleased to announce the release for Apache
Kafka 2.6.0

* TLSv1.3 has been enabled by default for Java 11 or newer.
* Significant performance improvements, especially when the broker has
large numbers of partitions
* Smooth scaling out of Kafka Streams applications
* Kafka Streams support for emit on change
* New metrics for better operational insight
* Kafka Connect can automatically create topics for source connectors
* Improved error reporting options for sink connectors in Kafka Connect
* New Filter and conditional SMTs in Kafka Connect
* The default value for the `client.dns.lookup` configuration is
now `use_all_dns_ips`
* Upgrade Zookeeper to 3.5.8

This release also includes other features, 74 improvements, 175 bug fixes,
plus other changes.

All of the changes in this release can be found in the release notes:
https://www.apache.org/dist/kafka/2.6.0/RELEASE_NOTES.html


You can download the source and binary release (Scala 2.12 and 2.13) from:
https://kafka.apache.org/downloads#2.6.0

---


Apache Kafka is a distributed streaming platform with four core APIs:


** The Producer API allows an application to publish a stream of records to
one or more Kafka topics.

** The Consumer API allows an application to subscribe to one or more
topics and process the stream of records produced to them.

** The Streams API allows an application to act as a stream processor,
consuming an input stream from one or more topics and producing an
output stream to one or more output topics, effectively transforming the
input streams to output streams.

** The Connector API allows building and running reusable producers or
consumers that connect Kafka topics to existing applications or data
systems. For example, a connector to a relational database might
capture every change to a table.


With these APIs, Kafka can be used for two broad classes of application:

** Building real-time streaming data pipelines that reliably get data
between systems or applications.

** Building real-time streaming applications that transform or react
to the streams of data.


Apache Kafka is in use at large and small companies worldwide, including
Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
Target, The New York Times, Uber, Yelp, and Zalando, among others.

A big thank you for the following 127 contributors to this release!

17hao, A. Sophie Blee-Goldman, Aakash Shah, Adam Bellemare, Agam Brahma,
Alaa Zbair, Alexandra Rodoni, Andras Katona, Andrew Olson, Andy Coates,
Aneel Nazareth, Anna Povzner, Antony Stubbs, Arjun Satish, Auston, avalsa,
Badai Aqrandista, belugabehr, Bill Bejeck, Bob Barrett, Boyang Chen, Brian
Bushree, Brian Byrne, Bruno Cadonna, Charles Feduke, Chia-Ping Tsai, Chris
Egerton, Colin Patrick McCabe, Daniel, Daniel Beskin, David Arthur, David
Jacot, David Mao, dengziming, Dezhi “Andy” Fang, Dima Reznik, Dominic
Evans, Ego, Eric Bolinger, Evelyn Bayes, Ewen Cheslack-Postava, fantayeneh,
feyman2016, Florian Hussonnois, Gardner Vickers, Greg Harris, Gunnar
Morling, Guozhang Wang, high.lee, Hossein Torabi, huxi, Ismael Juma, Jason
Gustafson, Jeff Huang, jeff kim, Jeff Widman, Jeremy Custenborder, Jiamei
Xie, jiameixie, jiao, Jim Galasyn, Joel Hamill, John Roesler, Jorge Esteban
Quilcate Otoya, José Armando García Sancio, Konstantine Karantasis, Kowshik
Prakasam, Kun Song, Lee Dongjin, Leonard Ge, Lev Zemlyanov, Levani
Kokhreidze, Liam Clarke-Hutchinson, Lucas Bradstreet, Lucent-Wong, Magnus
Edenhill, Manikumar Reddy, Mario Molina, Matthew Wong, Matthias J. Sax,
maulin-vasavada, Michael Viamari, Michal T, Mickael Maison, Mitch, Navina
Ramesh, Navinder Pal Singh Brar, nicolasguyomar, Nigel Liang, Nikolay,
Okada Haruki, Paul, Piotr Fras, Radai Rosenblatt, Rajini Sivaram, Randall
Hauch, Rens Groothuijsen, Richard Yu, Rigel Bezerra de Melo, Rob Meng,
Rohan, Ron Dagostino, Sanjana Kaundinya, Scott, Scott Hendricks, sebwills,
Shailesh Panwar, showuon, SoontaekLim, Stanislav Kozlovski, Steve
Rodrigues, Svend Vanderveken, Sönke Liebau, THREE LEVEL HELMET, Tom
Bentley, Tu V. Tran, Valeria, Vikas Singh, Viktor Somogyi, vinoth chandar,
Vito Jeng, Xavier Léauté, xiaodongdu, Zach Zhang, zhaohaidao, zshuo, 阿洋

We welcome your help and feedback. For more information on how to
report problems, and to get involved, visit the project website at
https://kafka.apache.org/

Thank you!


Regards,

Randall Hauch


Re: Preliminary blog post about the Apache 2.6.0 release

2020-08-05 Thread Randall Hauch
The last blog post for 2.5.0 mentioned KIP-500, and I know we've made a lot
of progress on that and will have even more in upcoming releases. I'd like
to add the following paragraph to the opening section, just after the
paragraph mentioning JDK 14 and Scala 2.13:

"Finally, these accomplishments are only one part of a larger active
roadmap in the run up to Apache Kafka 3.0, which may be one of the most
significant releases in the project’s history. The work to replace
Zookeeper (link to KIP-500) with built-in RAFT-based consensus is well
underway with eight KIPs in active development. Kafka’s new RAFT protocol
for the metadata quorum is already available for review (link to
https://github.com/apache/kafka/pull/9130). Tiered Storage unlocks infinite
scaling and faster rebalance times via KIP-405 (link to KIP-405), and is up
and running in internal clusters at Uber."

I'ved incorporated this into the preview doc, but feedback is welcome.

Best regards,

Randall

On Wed, Aug 5, 2020 at 10:27 AM Randall Hauch  wrote:

> I've prepared a preliminary blog post about the upcoming Apache Kafka
>  2.6.0 release.
> Please take a look and let me know via this thread if you want to
> add/modify details.
> Thanks to all who contributed to this blog post.
>
> Unfortunately, the preview is not currently publicly visible at
> https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache3 
> (I've
> logged https://issues.apache.org/jira/browse/INFRA-20646), so in the
> interest of time I created a Google doc with read privilege for everyone by
> pasting the preview content directly into:
>
>
> https://docs.google.com/document/d/1MQQoJk4ewedYgFfgSA2axx3VhRwgj_0rO1tf3yezV0c/edit?usp=sharing
>
> I will continue to update this doc with any changes to the draft blog post.
>
>
> Thanks,
> Randall
>


Re: Preliminary blog post about the Apache 2.6.0 release

2020-08-05 Thread Randall Hauch
Hi, Maulin.

We have traditionally not mentioned *every* KIP in the project's blog posts
-- unless a release just happens to involve a small number of KIPs. This
means that we do have to choose a subset of the KIPs -- AK 2.6.0 had 30
KIPs that were implemented, and a blog summarizing each would be too long
and much less widely received. I tried to choose KIPs that clearly impacted
the broadest set of users, and finding a subset wasn't that easy.

I chose not to include KIP-519 in the blog simply because it requires
installing and integrating components into the broker that are not included
in the official distribution. I hope that helps explain my thought process.

Best regards,

Randall

On Wed, Aug 5, 2020 at 12:52 PM Maulin Vasavada 
wrote:

> Hi Randall
>
> One question: Do we mention all KIPs/NewFeatures in the blog that are
> listed in the release notes document -
> https://home.apache.org/~rhauch/kafka-2.6.0-rc2/RELEASE_NOTES.html ?
>
> I see that [KAFKA-8890 <https://issues.apache.org/jira/browse/KAFKA-8890>]
> - KIP-519: Make SSL context/engine configuration extensible is missing from
> the google doc that you shared.
>
> Thanks
> Maulin
>
>
> On Wed, Aug 5, 2020 at 9:49 AM Randall Hauch  wrote:
>
> > Thanks, Jason. We haven't done that for a few releases, but I think it's
> a
> > great idea. I've updated the blog post draft and the Google doc to
> mention
> > the 127 contributors by name that will also be mentioned in the email
> > release announcement.
> >
> > I also linked to the project's downloads page in the opening sentence,
> and
> > tweaked the wording slightly in the first paragraph.
> >
> > Best regards,
> >
> > Randall
> >
> > On Wed, Aug 5, 2020 at 11:21 AM Jason Gustafson 
> > wrote:
> >
> > > Hey Randall,
> > >
> > > Thanks for putting this together. I think it would be great if the blog
> > > included the names of the release contributors. It's an easy way to
> give
> > > some recognition. I know we have done that in the past.
> > >
> > > Thanks,
> > > Jason
> > >
> > > On Wed, Aug 5, 2020 at 8:25 AM Randall Hauch 
> wrote:
> > >
> > > > I've prepared a preliminary blog post about the upcoming Apache Kafka
> > > 2.6.0
> > > > release.
> > > > Please take a look and let me know via this thread if you want to
> > > > add/modify details.
> > > > Thanks to all who contributed to this blog post.
> > > >
> > > > Unfortunately, the preview is not currently publicly visible at
> > > >
> > >
> >
> https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache3
> > > > (I've
> > > > logged https://issues.apache.org/jira/browse/INFRA-20646), so in the
> > > > interest of time I created a Google doc with read privilege for
> > everyone
> > > by
> > > > pasting the preview content directly into:
> > > >
> > > >
> > > >
> > >
> >
> https://docs.google.com/document/d/1MQQoJk4ewedYgFfgSA2axx3VhRwgj_0rO1tf3yezV0c/edit?usp=sharing
> > > >
> > > > I will continue to update this doc with any changes to the draft blog
> > > post.
> > > >
> > > >
> > > > Thanks,
> > > > Randall
> > > >
> > >
> >
>


Re: Preliminary blog post about the Apache 2.6.0 release

2020-08-05 Thread Randall Hauch
Thanks, Jason. We haven't done that for a few releases, but I think it's a
great idea. I've updated the blog post draft and the Google doc to mention
the 127 contributors by name that will also be mentioned in the email
release announcement.

I also linked to the project's downloads page in the opening sentence, and
tweaked the wording slightly in the first paragraph.

Best regards,

Randall

On Wed, Aug 5, 2020 at 11:21 AM Jason Gustafson  wrote:

> Hey Randall,
>
> Thanks for putting this together. I think it would be great if the blog
> included the names of the release contributors. It's an easy way to give
> some recognition. I know we have done that in the past.
>
> Thanks,
> Jason
>
> On Wed, Aug 5, 2020 at 8:25 AM Randall Hauch  wrote:
>
> > I've prepared a preliminary blog post about the upcoming Apache Kafka
> 2.6.0
> > release.
> > Please take a look and let me know via this thread if you want to
> > add/modify details.
> > Thanks to all who contributed to this blog post.
> >
> > Unfortunately, the preview is not currently publicly visible at
> >
> https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache3
> > (I've
> > logged https://issues.apache.org/jira/browse/INFRA-20646), so in the
> > interest of time I created a Google doc with read privilege for everyone
> by
> > pasting the preview content directly into:
> >
> >
> >
> https://docs.google.com/document/d/1MQQoJk4ewedYgFfgSA2axx3VhRwgj_0rO1tf3yezV0c/edit?usp=sharing
> >
> > I will continue to update this doc with any changes to the draft blog
> post.
> >
> >
> > Thanks,
> > Randall
> >
>


Preliminary blog post about the Apache 2.6.0 release

2020-08-05 Thread Randall Hauch
I've prepared a preliminary blog post about the upcoming Apache Kafka 2.6.0
release.
Please take a look and let me know via this thread if you want to
add/modify details.
Thanks to all who contributed to this blog post.

Unfortunately, the preview is not currently publicly visible at
https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache3
(I've
logged https://issues.apache.org/jira/browse/INFRA-20646), so in the
interest of time I created a Google doc with read privilege for everyone by
pasting the preview content directly into:

https://docs.google.com/document/d/1MQQoJk4ewedYgFfgSA2axx3VhRwgj_0rO1tf3yezV0c/edit?usp=sharing

I will continue to update this doc with any changes to the draft blog post.


Thanks,
Randall


[jira] [Resolved] (KAFKA-10341) Add version 2.6 to streams and systems tests

2020-08-04 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10341.
---
  Reviewer: Matthias J. Sax
Resolution: Fixed

Merged the PR to the `trunk` branch, and did not backport.

> Add version 2.6 to streams and systems tests
> 
>
> Key: KAFKA-10341
> URL: https://issues.apache.org/jira/browse/KAFKA-10341
> Project: Kafka
>  Issue Type: Task
>  Components: build, streams, system tests
>Affects Versions: 2.7.0
>    Reporter: Randall Hauch
>    Assignee: Randall Hauch
>Priority: Major
> Fix For: 2.7.0
>
>
> Part of the [2.6.0 release 
> process|https://cwiki.apache.org/confluence/display/KAFKA/Release+Process#ReleaseProcess-AnnouncetheRC].
>  This will be merged only to `trunk` for inclusion in 2.7.0
> See KAFKA-9779 for the changes made for the 2.5 release.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] 2.6.0 RC2

2020-08-04 Thread Randall Hauch
I wanted to follow up on two non-blocking issues Gwen mentioned earlier. I
didn't find any similar items, so I logged the following:

https://issues.apache.org/jira/browse/KAFKA-10358 - Remove the 2.12 sitedocs
https://issues.apache.org/jira/browse/KAFKA-10359 - AgentTest unit test
failure during verification build of AK 2.6.0 RC2

Best regards,

Randall

On Sat, Aug 1, 2020 at 12:16 AM Gwen Shapira  wrote:

> Thank you, Randall for driving this release.
>
> +1 (binding) after verifying signatures and hashes, building from sources,
> running unit/integration tests and some manual tests with the 2.13 build.
>
> Two minor things:
> 1. There were two sitedoc files - 2.12 and 2.13, we don't really need two
> sitedocs generated. Not a big deal, but maybe worth tracking and fixing.
> 2. I got one test failure locally:
>
> org.apache.kafka.trogdor.agent.AgentTest.testAgentGetStatus failed, log
> available in
>
> /Users/gwenshap/releases/2.6.0-rc2/kafka-2.6.0-src/tools/build/reports/testOutput/org.apache.kafka.trogdor.agent.AgentTest.testAgentGetStatus.test.stdout
>
> org.apache.kafka.trogdor.agent.AgentTest > testAgentGetStatus FAILED
> java.lang.RuntimeException:
> at
>
> org.apache.kafka.trogdor.rest.RestExceptionMapper.toException(RestExceptionMapper.java:69)
> at
>
> org.apache.kafka.trogdor.rest.JsonRestServer$HttpResponse.body(JsonRestServer.java:285)
> at
> org.apache.kafka.trogdor.agent.AgentClient.status(AgentClient.java:130)
> at
>
> org.apache.kafka.trogdor.agent.AgentTest.testAgentGetStatus(AgentTest.java:115)
>
> Gwen
>
> On Tue, Jul 28, 2020 at 2:50 PM Randall Hauch  wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the third candidate for release of Apache Kafka 2.6.0. This is a
> > major release that includes many new features, including:
> >
> > * TLSv1.3 has been enabled by default for Java 11 or newer.
> > * Smooth scaling out of Kafka Streams applications
> > * Kafka Streams support for emit on change
> > * New metrics for better operational insight
> > * Kafka Connect can automatically create topics for source connectors
> > * Improved error reporting options for sink connectors in Kafka Connect
> > * New Filter and conditional SMTs in Kafka Connect
> > * The default value for the `client.dns.lookup` configuration is
> > now `use_all_dns_ips`
> > * Upgrade Zookeeper to 3.5.8
> >
> > This release also includes a few other features, 74 improvements, 175 bug
> > fixes, plus other fixes.
> >
> > Release notes for the 2.6.0 release:
> > https://home.apache.org/~rhauch/kafka-2.6.0-rc2/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Monday, August 3, 9am PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > https://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~rhauch/kafka-2.6.0-rc2/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > https://home.apache.org/~rhauch/kafka-2.6.0-rc2/javadoc/
> >
> > * Tag to be voted upon (off 2.6 branch) is the 2.6.0 tag:
> > https://github.com/apache/kafka/releases/tag/2.6.0-rc2
> >
> > * Documentation:
> > https://kafka.apache.org/26/documentation.html
> >
> > * Protocol:
> > https://kafka.apache.org/26/protocol.html
> >
> > * Successful Jenkins builds for the 2.6 branch:
> > Unit/integration tests:
> https://builds.apache.org/job/kafka-2.6-jdk8/101/
> > System tests: (link to follow)
> >
> >
> > Thanks,
> > Randall Hauch
> >
>
>
> --
> Gwen Shapira
> Engineering Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


[jira] [Created] (KAFKA-10359) Test failure during verification build of AK 2.6.0 RC2

2020-08-04 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10359:
-

 Summary: Test failure during verification build of AK 2.6.0 RC2
 Key: KAFKA-10359
 URL: https://issues.apache.org/jira/browse/KAFKA-10359
 Project: Kafka
  Issue Type: Bug
  Components: tools, unit tests
Affects Versions: 2.6.0
Reporter: Randall Hauch


The following error was reported by [~gshapira_impala_35cc] when she was 
verifying AK 2.6.0 RC2:
{noformat}
org.apache.kafka.trogdor.agent.AgentTest.testAgentGetStatus failed, log
available in
/Users/gwenshap/releases/2.6.0-rc2/kafka-2.6.0-src/tools/build/reports/testOutput/org.apache.kafka.trogdor.agent.AgentTest.testAgentGetStatus.test.stdout

org.apache.kafka.trogdor.agent.AgentTest > testAgentGetStatus FAILED
java.lang.RuntimeException:
at 
org.apache.kafka.trogdor.rest.RestExceptionMapper.toException(RestExceptionMapper.java:69)
at
org.apache.kafka.trogdor.rest.JsonRestServer$HttpResponse.body(JsonRestServer.java:285)
at
org.apache.kafka.trogdor.agent.AgentClient.status(AgentClient.java:130)
at
org.apache.kafka.trogdor.agent.AgentTest.testAgentGetStatus(AgentTest.java:115)
{noformat}

No similar issue appears to have been reported previously, and this failure did not 
occur in the Jenkins builds for 2.6.0 RC2, nor was it reported by anyone else verifying the RC.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10358) Remove the 2.12 sitedocs

2020-08-04 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10358:
-

 Summary: Remove the 2.12 sitedocs
 Key: KAFKA-10358
 URL: https://issues.apache.org/jira/browse/KAFKA-10358
 Project: Kafka
  Issue Type: Task
  Components: release, build
Affects Versions: 2.7.0
Reporter: Randall Hauch


Per [~gshapira_impala_35cc]'s comment during the [AK 2.6.0 RC2 
vote|https://lists.apache.org/thread.html/rc8a3aa6986204adbb9ff326b8de849b3c8bac5b6b2b436e8143afea9%40%3Cdev.kafka.apache.org%3E]:
{quote}
There were two sitedoc files - 2.12 and 2.13, we don't really need two
sitedocs generated. Not a big deal, but maybe worth tracking and fixing.
{quote}

During the release, we're publishing site-docs for both 2.12 and 2.13, but we 
really don't need both. For example, in AK 2.6.0 we published:
{noformat}
...
kafka_2.12-2.6.0-site-docs.tgz
kafka_2.12-2.6.0-site-docs.tgz.asc
kafka_2.12-2.6.0-site-docs.tgz.md5
kafka_2.12-2.6.0-site-docs.tgz.sha1
kafka_2.12-2.6.0-site-docs.tgz.sha512
...
kafka_2.13-2.6.0-site-docs.tgz
kafka_2.13-2.6.0-site-docs.tgz.asc
kafka_2.13-2.6.0-site-docs.tgz.md5
kafka_2.13-2.6.0-site-docs.tgz.sha1
kafka_2.13-2.6.0-site-docs.tgz.sha512
{noformat}

Ideally we would change the build to avoid producing the site-docs for both 
Scala versions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10341) Add version 2.6 to streams and systems tests

2020-08-03 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10341:
-

 Summary: Add version 2.6 to streams and systems tests
 Key: KAFKA-10341
 URL: https://issues.apache.org/jira/browse/KAFKA-10341
 Project: Kafka
  Issue Type: Task
  Components: build, streams, system tests
Affects Versions: 2.7.0
Reporter: Randall Hauch
Assignee: Randall Hauch


Part of the [2.6.0 release 
process|https://cwiki.apache.org/confluence/display/KAFKA/Release+Process#ReleaseProcess-AnnouncetheRC].

See KAFKA-9779 for the changes made for the 2.5 release.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[RESULTS] [VOTE] Release Kafka version 2.6.0

2020-08-03 Thread Randall Hauch
This vote passes with five +1 votes (3 binding) and no 0 or -1 votes.

+1 votes
PMC Members (in voting order):
* Rajini Sivaram
* Gwen Shapira
* Ismael Juma

Committers (in voting order):
* Bill Bejeck
* Randall Hauch

Community:
* No votes

0 votes
* No votes

-1 votes
* No votes

Vote thread:
https://lists.apache.org/thread.html/rc8a3aa6986204adbb9ff326b8de849b3c8bac5b6b2b436e8143afea9%40%3Cdev.kafka.apache.org%3E

I'll continue with the release process and the release announcement will
follow in the next few days.

Randall Hauch


Re: [VOTE] 2.6.0 RC2

2020-08-03 Thread Randall Hauch
+1 (non-binding)

I'm closing the vote since this has met the release criteria.

Randall

On Mon, Aug 3, 2020 at 2:57 AM Ismael Juma  wrote:

> +1 (binding), verified signatures, ran the tests on the source archive with
> Scala 2.13 and Java 14 and verified the quickstart with the source archive
> and Scala 2.13 binary archive.
>
> Thanks,
> Ismael
>
> On Tue, Jul 28, 2020, 2:52 PM Randall Hauch  wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the third candidate for release of Apache Kafka 2.6.0. This is a
> > major release that includes many new features, including:
> >
> > * TLSv1.3 has been enabled by default for Java 11 or newer.
> > * Smooth scaling out of Kafka Streams applications
> > * Kafka Streams support for emit on change
> > * New metrics for better operational insight
> > * Kafka Connect can automatically create topics for source connectors
> > * Improved error reporting options for sink connectors in Kafka Connect
> > * New Filter and conditional SMTs in Kafka Connect
> > * The default value for the `client.dns.lookup` configuration is
> > now `use_all_dns_ips`
> > * Upgrade Zookeeper to 3.5.8
> >
> > This release also includes a few other features, 74 improvements, 175 bug
> > fixes, plus other fixes.
> >
> > Release notes for the 2.6.0 release:
> > https://home.apache.org/~rhauch/kafka-2.6.0-rc2/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Monday, August 3, 9am PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > https://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~rhauch/kafka-2.6.0-rc2/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > https://home.apache.org/~rhauch/kafka-2.6.0-rc2/javadoc/
> >
> > * Tag to be voted upon (off 2.6 branch) is the 2.6.0 tag:
> > https://github.com/apache/kafka/releases/tag/2.6.0-rc2
> >
> > * Documentation:
> > https://kafka.apache.org/26/documentation.html
> >
> > * Protocol:
> > https://kafka.apache.org/26/protocol.html
> >
> > * Successful Jenkins builds for the 2.6 branch:
> > Unit/integration tests:
> https://builds.apache.org/job/kafka-2.6-jdk8/101/
> > System tests: (link to follow)
> >
> >
> > Thanks,
> > Randall Hauch
> >
>


Re: [VOTE] 2.6.0 RC2

2020-08-01 Thread Randall Hauch
We finally have a green system test using the same commit used for RC2:
https://jenkins.confluent.io/job/system-test-kafka/job/2.6/53/

Thanks to Rajini and Gwen for their testing, verification, and binding +1s.
Thanks also to Bill for his testing and non-binding +1.

That means we're looking for at least one more binding +1 vote from PMC
members. Please download, test and vote by Monday, August 3, 9am PT.

Best regards!

Randall

On Fri, Jul 31, 2020 at 11:44 AM Randall Hauch  wrote:

> Thanks, Rajini.
>
> Here's an update on the system tests. Unfortunately we've not yet had a
> fully-green system test run, but each of the system test runs since
> https://jenkins.confluent.io/job/system-test-kafka/job/2.6/49/ has had
> just one or two failures -- and no failure has been repeated. This suggests
> the failing tests appear to be somewhat flaky. I'll keep running more
> system tests and will reply here if something appears suspicious, but
> please holler if you think my analysis is incorrect.
>
> Best regards,
>
> Randall
>
> On Fri, Jul 31, 2020 at 11:00 AM Rajini Sivaram 
> wrote:
>
>> Thanks Randall, +1 (binding)
>>
>> Built from source and ran tests, had a quick look through some Javadoc
>> changes, ran quickstart and some tests with Java 11 TLSv1.3 on the binary.
>>
>> Regards,
>>
>> Rajini
>>
>>
>> On Tue, Jul 28, 2020 at 10:50 PM Randall Hauch  wrote:
>>
>> > Hello Kafka users, developers and client-developers,
>> >
>> > This is the third candidate for release of Apache Kafka 2.6.0. This is a
>> > major release that includes many new features, including:
>> >
>> > * TLSv1.3 has been enabled by default for Java 11 or newer.
>> > * Smooth scaling out of Kafka Streams applications
>> > * Kafka Streams support for emit on change
>> > * New metrics for better operational insight
>> > * Kafka Connect can automatically create topics for source connectors
>> > * Improved error reporting options for sink connectors in Kafka Connect
>> > * New Filter and conditional SMTs in Kafka Connect
>> > * The default value for the `client.dns.lookup` configuration is
>> > now `use_all_dns_ips`
>> > * Upgrade Zookeeper to 3.5.8
>> >
>> > This release also includes a few other features, 74 improvements, 175
>> bug
>> > fixes, plus other fixes.
>> >
>> > Release notes for the 2.6.0 release:
>> > https://home.apache.org/~rhauch/kafka-2.6.0-rc2/RELEASE_NOTES.html
>> >
>> > *** Please download, test and vote by Monday, August 3, 9am PT
>> >
>> > Kafka's KEYS file containing PGP keys we use to sign the release:
>> > https://kafka.apache.org/KEYS
>> >
>> > * Release artifacts to be voted upon (source and binary):
>> > https://home.apache.org/~rhauch/kafka-2.6.0-rc2/
>> >
>> > * Maven artifacts to be voted upon:
>> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
>> >
>> > * Javadoc:
>> > https://home.apache.org/~rhauch/kafka-2.6.0-rc2/javadoc/
>> >
>> > * Tag to be voted upon (off 2.6 branch) is the 2.6.0 tag:
>> > https://github.com/apache/kafka/releases/tag/2.6.0-rc2
>> >
>> > * Documentation:
>> > https://kafka.apache.org/26/documentation.html
>> >
>> > * Protocol:
>> > https://kafka.apache.org/26/protocol.html
>> >
>> > * Successful Jenkins builds for the 2.6 branch:
>> > Unit/integration tests:
>> https://builds.apache.org/job/kafka-2.6-jdk8/101/
>> > System tests: (link to follow)
>> >
>> >
>> > Thanks,
>> > Randall Hauch
>> >
>>
>


Re: [VOTE] 2.6.0 RC2

2020-08-01 Thread Randall Hauch
Thanks, Gwen.

I'll log an issue to remove the kafka_2.12-2.6.0-site-docs.* files, and
look into whether AgentTest.testAgentGetStatus is flaky and if so log an
issue.

Randall

On Sat, Aug 1, 2020 at 12:16 AM Gwen Shapira  wrote:

> Thank you, Randall for driving this release.
>
> +1 (binding) after verifying signatures and hashes, building from sources,
> running unit/integration tests and some manual tests with the 2.13 build.
>
> Two minor things:
> 1. There were two sitedoc files - 2.12 and 2.13, we don't really need two
> sitedocs generated. Not a big deal, but maybe worth tracking and fixing.
> 2. I got one test failure locally:
>
> org.apache.kafka.trogdor.agent.AgentTest.testAgentGetStatus failed, log
> available in
>
> /Users/gwenshap/releases/2.6.0-rc2/kafka-2.6.0-src/tools/build/reports/testOutput/org.apache.kafka.trogdor.agent.AgentTest.testAgentGetStatus.test.stdout
>
> org.apache.kafka.trogdor.agent.AgentTest > testAgentGetStatus FAILED
> java.lang.RuntimeException:
> at
>
> org.apache.kafka.trogdor.rest.RestExceptionMapper.toException(RestExceptionMapper.java:69)
> at
>
> org.apache.kafka.trogdor.rest.JsonRestServer$HttpResponse.body(JsonRestServer.java:285)
> at
> org.apache.kafka.trogdor.agent.AgentClient.status(AgentClient.java:130)
> at
>
> org.apache.kafka.trogdor.agent.AgentTest.testAgentGetStatus(AgentTest.java:115)
>
> Gwen
>
> On Tue, Jul 28, 2020 at 2:50 PM Randall Hauch  wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the third candidate for release of Apache Kafka 2.6.0. This is a
> > major release that includes many new features, including:
> >
> > * TLSv1.3 has been enabled by default for Java 11 or newer.
> > * Smooth scaling out of Kafka Streams applications
> > * Kafka Streams support for emit on change
> > * New metrics for better operational insight
> > * Kafka Connect can automatically create topics for source connectors
> > * Improved error reporting options for sink connectors in Kafka Connect
> > * New Filter and conditional SMTs in Kafka Connect
> > * The default value for the `client.dns.lookup` configuration is
> > now `use_all_dns_ips`
> > * Upgrade Zookeeper to 3.5.8
> >
> > This release also includes a few other features, 74 improvements, 175 bug
> > fixes, plus other fixes.
> >
> > Release notes for the 2.6.0 release:
> > https://home.apache.org/~rhauch/kafka-2.6.0-rc2/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Monday, August 3, 9am PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > https://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~rhauch/kafka-2.6.0-rc2/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > https://home.apache.org/~rhauch/kafka-2.6.0-rc2/javadoc/
> >
> > * Tag to be voted upon (off 2.6 branch) is the 2.6.0 tag:
> > https://github.com/apache/kafka/releases/tag/2.6.0-rc2
> >
> > * Documentation:
> > https://kafka.apache.org/26/documentation.html
> >
> > * Protocol:
> > https://kafka.apache.org/26/protocol.html
> >
> > * Successful Jenkins builds for the 2.6 branch:
> > Unit/integration tests:
> https://builds.apache.org/job/kafka-2.6-jdk8/101/
> > System tests: (link to follow)
> >
> >
> > Thanks,
> > Randall Hauch
> >
>
>
> --
> Gwen Shapira
> Engineering Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


Re: [VOTE] 2.6.0 RC2

2020-07-31 Thread Randall Hauch
Thanks, Rajini.

Here's an update on the system tests. Unfortunately we've not yet had a
fully-green system test run, but each of the system test runs since
https://jenkins.confluent.io/job/system-test-kafka/job/2.6/49/ has had just
one or two failures -- and no failure has been repeated. This suggests the
failing tests appear to be somewhat flaky. I'll keep running more system
tests and will reply here if something appears suspicious, but please
holler if you think my analysis is incorrect.

Best regards,

Randall

On Fri, Jul 31, 2020 at 11:00 AM Rajini Sivaram 
wrote:

> Thanks Randall, +1 (binding)
>
> Built from source and ran tests, had a quick look through some Javadoc
> changes, ran quickstart and some tests with Java 11 TLSv1.3 on the binary.
>
> Regards,
>
> Rajini
>
>
> On Tue, Jul 28, 2020 at 10:50 PM Randall Hauch  wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the third candidate for release of Apache Kafka 2.6.0. This is a
> > major release that includes many new features, including:
> >
> > * TLSv1.3 has been enabled by default for Java 11 or newer.
> > * Smooth scaling out of Kafka Streams applications
> > * Kafka Streams support for emit on change
> > * New metrics for better operational insight
> > * Kafka Connect can automatically create topics for source connectors
> > * Improved error reporting options for sink connectors in Kafka Connect
> > * New Filter and conditional SMTs in Kafka Connect
> > * The default value for the `client.dns.lookup` configuration is
> > now `use_all_dns_ips`
> > * Upgrade Zookeeper to 3.5.8
> >
> > This release also includes a few other features, 74 improvements, 175 bug
> > fixes, plus other fixes.
> >
> > Release notes for the 2.6.0 release:
> > https://home.apache.org/~rhauch/kafka-2.6.0-rc2/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Monday, August 3, 9am PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > https://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~rhauch/kafka-2.6.0-rc2/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > https://home.apache.org/~rhauch/kafka-2.6.0-rc2/javadoc/
> >
> > * Tag to be voted upon (off 2.6 branch) is the 2.6.0 tag:
> > https://github.com/apache/kafka/releases/tag/2.6.0-rc2
> >
> > * Documentation:
> > https://kafka.apache.org/26/documentation.html
> >
> > * Protocol:
> > https://kafka.apache.org/26/protocol.html
> >
> > * Successful Jenkins builds for the 2.6 branch:
> > Unit/integration tests:
> https://builds.apache.org/job/kafka-2.6-jdk8/101/
> > System tests: (link to follow)
> >
> >
> > Thanks,
> > Randall Hauch
> >
>


[jira] [Created] (KAFKA-10329) Enable connector context in logs by default

2020-07-30 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10329:
-

 Summary: Enable connector context in logs by default
 Key: KAFKA-10329
 URL: https://issues.apache.org/jira/browse/KAFKA-10329
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 3.0.0
Reporter: Randall Hauch
 Fix For: 3.0.0


When 
[KIP-449|https://cwiki.apache.org/confluence/display/KAFKA/KIP-449%3A+Add+connector+contexts+to+Connect+worker+logs]
 was implemented and released as part of AK 2.3, we chose not to enable this 
extra logging context information by default because it was not backward 
compatible: anyone relying upon the `connect-log4j.properties` file provided by 
the AK distribution would, after an upgrade to AK 2.3 (or later), see a different 
format for their logs, which could break any log processing functionality they 
relied upon.

However, we should enable this in AK 3.0, whenever that comes. Doing so will 
require a fairly minor KIP to change the `connect-log4j.properties` file 
slightly.

Marked this as BLOCKER since it's a backward incompatible change that we 
definitely want to do in the 3.0.0 release.
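
For illustration only (the eventual KIP may word this differently), enabling the context by 
default essentially means adding the {{connector.context}} MDC variable introduced by KIP-449 to 
the default conversion pattern in `connect-log4j.properties`, roughly:
{noformat}
# Hypothetical sketch only, not the final KIP change: include the KIP-449 MDC
# variable in the default pattern so connector/task context appears on each line,
# e.g. "[my-connector|task-0] " when the worker sets it.
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %X{connector.context}%m (%c:%L)%n
{noformat}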



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] 2.6.0 RC1

2020-07-28 Thread Randall Hauch
I've announced RC2 on a different thread entitled "[VOTE] 2.6.0 RC2" (see
https://lists.apache.org/thread.html/rc8a3aa6986204adbb9ff326b8de849b3c8bac5b6b2b436e8143afea9%40%3Cdev.kafka.apache.org%3E).
Please use that thread to highlight any blockers with that release
candidate.

Best regards,

Randall

On Mon, Jul 27, 2020 at 12:55 PM Randall Hauch  wrote:

> Thanks, John. Looks like we're still trying to get a green build for
> https://github.com/apache/kafka/pull/9066.
>
> On Fri, Jul 24, 2020 at 3:46 PM John Roesler  wrote:
>
>> Hi Randall,
>>
>> I'm sorry to say we have also identified that this flaky test
>> failure turned out to be a real blocker bug:
>> https://issues.apache.org/jira/browse/KAFKA-10287
>>
>> There is a PR in progress.
>>
>> Thanks,
>> -John
>>
>> On Fri, Jul 24, 2020, at 12:26, Matthias J. Sax wrote:
>> > We found a regression bug that seems to be a blocker:
>> > https://issues.apache.org/jira/browse/KAFKA-10306
>> >
>> > Will work on a PR today.
>> >
>> >
>> > -Matthias
>> >
>> > On 7/22/20 9:40 AM, Randall Hauch wrote:
>> > > Any thoughts, Rajini?
>> > >
>> > > On Mon, Jul 20, 2020 at 9:55 PM Randall Hauch 
>> wrote:
>> > >
>> > >>
>> > >> When I was checking the documentation for RC1 after the tag was
>> pushed, I
>> > >> noticed that the fix Rajini mentioned in the RC0 vote thread (
>> > >> https://github.com/apache/kafka/pull/8979
>> > >> <
>> https://github.com/apache/kafka/pull/8979/files#diff-369f0debebfcda6709beeaf11612b34bR20-R21
>> >)
>> > >> and merged to the `2.6` branch includes the following comment about
>> being
>> > >> deprecated in 2.7:
>> > >>
>> https://github.com/apache/kafka/pull/8979/files#diff-369f0debebfcda6709beeaf11612b34bR20-R21
>> > >> .
>> > >>
>> > >> Rajini, can you please check the commits merged to the `2.6` do not
>> have
>> > >> the reference to 2.7? Since these are JavaDocs, I'm assuming that
>> we'll
>> > >> need to cut RC2.
>> > >>
>> > >> But it'd be good for everyone else to double check this release.
>> > >>
>> > >> Best regards,
>> > >>
>> > >> Randall Hauch
>> > >>
>> > >> On Mon, Jul 20, 2020 at 9:50 PM Randall Hauch 
>> wrote:
>> > >>
>> > >>> Hello Kafka users, developers and client-developers,
>> > >>>
>> > >>> This is the second candidate for release of Apache Kafka 2.6.0.
>> This is a
>> > >>> major release that includes many new features, including:
>> > >>>
>> > >>> * TLSv1.3 has been enabled by default for Java 11 or newer.
>> > >>> * Smooth scaling out of Kafka Streams applications
>> > >>> * Kafka Streams support for emit on change
>> > >>> * New metrics for better operational insight
>> > >>> * Kafka Connect can automatically create topics for source
>> connectors
>> > >>> * Improved error reporting options for sink connectors in Kafka
>> Connect
>> > >>> * New Filter and conditional SMTs in Kafka Connect
>> > >>> * The default value for the `client.dns.lookup` configuration is
>> > >>> now `use_all_dns_ips`
>> > >>> * Upgrade Zookeeper to 3.5.8
>> > >>>
>> > >>> This release also includes a few other features, 76 improvements,
>> and 165
>> > >>> bug fixes.
>> > >>>
>> > >>> Release notes for the 2.6.0 release:
>> > >>> https://home.apache.org/~rhauch/kafka-2.6.0-rc1/RELEASE_NOTES.html
>> > >>>
>> > >>> *** Please download, test and vote by Monday, July 20, 9am PT
>> > >>>
>> > >>> Kafka's KEYS file containing PGP keys we use to sign the release:
>> > >>> https://kafka.apache.org/KEYS
>> > >>>
>> > >>> * Release artifacts to be voted upon (source and binary):
>> > >>> https://home.apache.org/~rhauch/kafka-2.6.0-rc1/
>> > >>>
>> > >>> * Maven artifacts to be voted upon:
>> > >>>
>> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>> > >>>
>> > >>> * Javadoc:
>> > >>> https://home.apache.org/~rhauch/kafka-2.6.0-rc1/javadoc/
>> > >>>
>> > >>> * Tag to be voted upon (off 2.6 branch) is the 2.6.0 tag:
>> > >>> https://github.com/apache/kafka/releases/tag/2.6.0-rc1
>> > >>>
>> > >>> * Documentation:
>> > >>> https://kafka.apache.org/26/documentation.html
>> > >>>
>> > >>> * Protocol:
>> > >>> https://kafka.apache.org/26/protocol.html
>> > >>>
>> > >>> * Successful Jenkins builds for the 2.6 branch:
>> > >>> Unit/integration tests:
>> https://builds.apache.org/job/kafka-2.6-jdk8/91/ (one
>> > >>> flaky test)
>> > >>> System tests: (link to follow)
>> > >>>
>> > >>> Thanks,
>> > >>> Randall Hauch
>> > >>>
>> > >>
>> > >
>> >
>> >
>> > Attachments:
>> > * signature.asc
>>
>


[VOTE] 2.6.0 RC2

2020-07-28 Thread Randall Hauch
Hello Kafka users, developers and client-developers,

This is the third candidate for release of Apache Kafka 2.6.0. This is a
major release that includes many new features, including:

* TLSv1.3 has been enabled by default for Java 11 or newer.
* Smooth scaling out of Kafka Streams applications
* Kafka Streams support for emit on change
* New metrics for better operational insight
* Kafka Connect can automatically create topics for source connectors
* Improved error reporting options for sink connectors in Kafka Connect
* New Filter and conditional SMTs in Kafka Connect
* The default value for the `client.dns.lookup` configuration is
now `use_all_dns_ips`
* Upgrade Zookeeper to 3.5.8

This release also includes a few other features, 74 improvements, 175 bug
fixes, plus other fixes.

Release notes for the 2.6.0 release:
https://home.apache.org/~rhauch/kafka-2.6.0-rc2/RELEASE_NOTES.html

*** Please download, test and vote by Monday, August 3, 9am PT

Kafka's KEYS file containing PGP keys we use to sign the release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
https://home.apache.org/~rhauch/kafka-2.6.0-rc2/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~rhauch/kafka-2.6.0-rc2/javadoc/

* Tag to be voted upon (off 2.6 branch) is the 2.6.0 tag:
https://github.com/apache/kafka/releases/tag/2.6.0-rc2

* Documentation:
https://kafka.apache.org/26/documentation.html

* Protocol:
https://kafka.apache.org/26/protocol.html

* Successful Jenkins builds for the 2.6 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-2.6-jdk8/101/
System tests: (link to follow)


Thanks,
Randall Hauch


Re: [VOTE] 2.6.0 RC1

2020-07-27 Thread Randall Hauch
Thanks, John. Looks like we're still trying to get a green build for
https://github.com/apache/kafka/pull/9066.

On Fri, Jul 24, 2020 at 3:46 PM John Roesler  wrote:

> Hi Randall,
>
> I'm sorry to say we have also identified that this flaky test
> failure turned out to be a real blocker bug:
> https://issues.apache.org/jira/browse/KAFKA-10287
>
> There is a PR in progress.
>
> Thanks,
> -John
>
> On Fri, Jul 24, 2020, at 12:26, Matthias J. Sax wrote:
> > We found a regression bug that seems to be a blocker:
> > https://issues.apache.org/jira/browse/KAFKA-10306
> >
> > Will work on a PR today.
> >
> >
> > -Matthias
> >
> > On 7/22/20 9:40 AM, Randall Hauch wrote:
> > > Any thoughts, Rajini?
> > >
> > > On Mon, Jul 20, 2020 at 9:55 PM Randall Hauch 
> wrote:
> > >
> > >>
> > >> When I was checking the documentation for RC1 after the tag was
> pushed, I
> > >> noticed that the fix Rajini mentioned in the RC0 vote thread (
> > >> https://github.com/apache/kafka/pull/8979
> > >> <
> https://github.com/apache/kafka/pull/8979/files#diff-369f0debebfcda6709beeaf11612b34bR20-R21
> >)
> > >> and merged to the `2.6` branch includes the following comment about
> being
> > >> deprecated in 2.7:
> > >>
> https://github.com/apache/kafka/pull/8979/files#diff-369f0debebfcda6709beeaf11612b34bR20-R21
> > >> .
> > >>
> > >> Rajini, can you please check the commits merged to the `2.6` do not
> have
> > >> the reference to 2.7? Since these are JavaDocs, I'm assuming that
> we'll
> > >> need to cut RC2.
> > >>
> > >> But it'd be good for everyone else to double check this release.
> > >>
> > >> Best regards,
> > >>
> > >> Randall Hauch
> > >>
> > >> On Mon, Jul 20, 2020 at 9:50 PM Randall Hauch 
> wrote:
> > >>
> > >>> Hello Kafka users, developers and client-developers,
> > >>>
> > >>> This is the second candidate for release of Apache Kafka 2.6.0. This
> is a
> > >>> major release that includes many new features, including:
> > >>>
> > >>> * TLSv1.3 has been enabled by default for Java 11 or newer.
> > >>> * Smooth scaling out of Kafka Streams applications
> > >>> * Kafka Streams support for emit on change
> > >>> * New metrics for better operational insight
> > >>> * Kafka Connect can automatically create topics for source connectors
> > >>> * Improved error reporting options for sink connectors in Kafka
> Connect
> > >>> * New Filter and conditional SMTs in Kafka Connect
> > >>> * The default value for the `client.dns.lookup` configuration is
> > >>> now `use_all_dns_ips`
> > >>> * Upgrade Zookeeper to 3.5.8
> > >>>
> > >>> This release also includes a few other features, 76 improvements,
> and 165
> > >>> bug fixes.
> > >>>
> > >>> Release notes for the 2.6.0 release:
> > >>> https://home.apache.org/~rhauch/kafka-2.6.0-rc1/RELEASE_NOTES.html
> > >>>
> > >>> *** Please download, test and vote by Monday, July 20, 9am PT
> > >>>
> > >>> Kafka's KEYS file containing PGP keys we use to sign the release:
> > >>> https://kafka.apache.org/KEYS
> > >>>
> > >>> * Release artifacts to be voted upon (source and binary):
> > >>> https://home.apache.org/~rhauch/kafka-2.6.0-rc1/
> > >>>
> > >>> * Maven artifacts to be voted upon:
> > >>>
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >>>
> > >>> * Javadoc:
> > >>> https://home.apache.org/~rhauch/kafka-2.6.0-rc1/javadoc/
> > >>>
> > >>> * Tag to be voted upon (off 2.6 branch) is the 2.6.0 tag:
> > >>> https://github.com/apache/kafka/releases/tag/2.6.0-rc1
> > >>>
> > >>> * Documentation:
> > >>> https://kafka.apache.org/26/documentation.html
> > >>>
> > >>> * Protocol:
> > >>> https://kafka.apache.org/26/protocol.html
> > >>>
> > >>> * Successful Jenkins builds for the 2.6 branch:
> > >>> Unit/integration tests:
> https://builds.apache.org/job/kafka-2.6-jdk8/91/ (one
> > >>> flaky test)
> > >>> System tests: (link to follow)
> > >>>
> > >>> Thanks,
> > >>> Randall Hauch
> > >>>
> > >>
> > >
> >
> >
> > Attachments:
> > * signature.asc
>


Re: [VOTE] 2.6.0 RC1

2020-07-22 Thread Randall Hauch
Any thoughts, Rajini?

On Mon, Jul 20, 2020 at 9:55 PM Randall Hauch  wrote:

>
> When I was checking the documentation for RC1 after the tag was pushed, I
> noticed that the fix Rajini mentioned in the RC0 vote thread (
> https://github.com/apache/kafka/pull/8979
> <https://github.com/apache/kafka/pull/8979/files#diff-369f0debebfcda6709beeaf11612b34bR20-R21>)
> and merged to the `2.6` branch includes the following comment about being
> deprecated in 2.7:
> https://github.com/apache/kafka/pull/8979/files#diff-369f0debebfcda6709beeaf11612b34bR20-R21
> .
>
> Rajini, can you please check the commits merged to the `2.6` do not have
> the reference to 2.7? Since these are JavaDocs, I'm assuming that we'll
> need to cut RC2.
>
> But it'd be good for everyone else to double check this release.
>
> Best regards,
>
> Randall Hauch
>
> On Mon, Jul 20, 2020 at 9:50 PM Randall Hauch  wrote:
>
>> Hello Kafka users, developers and client-developers,
>>
>> This is the second candidate for release of Apache Kafka 2.6.0. This is a
>> major release that includes many new features, including:
>>
>> * TLSv1.3 has been enabled by default for Java 11 or newer.
>> * Smooth scaling out of Kafka Streams applications
>> * Kafka Streams support for emit on change
>> * New metrics for better operational insight
>> * Kafka Connect can automatically create topics for source connectors
>> * Improved error reporting options for sink connectors in Kafka Connect
>> * New Filter and conditional SMTs in Kafka Connect
>> * The default value for the `client.dns.lookup` configuration is
>> now `use_all_dns_ips`
>> * Upgrade Zookeeper to 3.5.8
>>
>> This release also includes a few other features, 76 improvements, and 165
>> bug fixes.
>>
>> Release notes for the 2.6.0 release:
>> https://home.apache.org/~rhauch/kafka-2.6.0-rc1/RELEASE_NOTES.html
>>
>> *** Please download, test and vote by Monday, July 20, 9am PT
>>
>> Kafka's KEYS file containing PGP keys we use to sign the release:
>> https://kafka.apache.org/KEYS
>>
>> * Release artifacts to be voted upon (source and binary):
>> https://home.apache.org/~rhauch/kafka-2.6.0-rc1/
>>
>> * Maven artifacts to be voted upon:
>> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>>
>> * Javadoc:
>> https://home.apache.org/~rhauch/kafka-2.6.0-rc1/javadoc/
>>
>> * Tag to be voted upon (off 2.6 branch) is the 2.6.0 tag:
>> https://github.com/apache/kafka/releases/tag/2.6.0-rc1
>>
>> * Documentation:
>> https://kafka.apache.org/26/documentation.html
>>
>> * Protocol:
>> https://kafka.apache.org/26/protocol.html
>>
>> * Successful Jenkins builds for the 2.6 branch:
>> Unit/integration tests: https://builds.apache.org/job/kafka-2.6-jdk8/91/ (one
>> flaky test)
>> System tests: (link to follow)
>>
>> Thanks,
>> Randall Hauch
>>
>


Re: [VOTE] 2.6.0 RC0

2020-07-20 Thread Randall Hauch
Thanks, Rajini. I've cut RC1, but after I pushed the commit I found a
discrepancy in the JavaDoc that mentions 2.7. Please see the "[VOTE] 2.6.0
RC1" discussion thread for details.

On Mon, Jul 20, 2020 at 4:59 AM Rajini Sivaram 
wrote:

> Thanks Randall. We didn't create a JIRA for the KIP-546 security fix since
> KAFKA-7740 that introduced the regression had a fix version of 2.5.0. A fix
> was applied without a JIRA to follow the security bug fix process. In the
> end, it turned out that KAFKA-7740 was only in 2.6, so the bug hadn't got
> into a release anyway.
>
> Regards,
>
> Rajini
>
>
> On Sun, Jul 19, 2020 at 2:46 PM Randall Hauch  wrote:
>
> > Thanks, Rajini. Is there a Jira issue for the fix related to KIP-546? If
> > so, please make sure the Fix Version(s) include `2.6.0`.
> >
> > I'm going to start RC1 later today and hope to get it published by
> Monday.
> > In the meantime, if anyone finds anything else in RC0, please raise it
> here
> > -- if it's after RC1 is published then we'll just cut another RC with any
> > fixes.
> >
> > We're down to just 5 system test failures [1], and folks are actively
> > working to address them. At least some are known to be flaky, but we
> still
> > want to get them fixed.
> >
> > Best regards,
> >
> > Randall
> >
> > On Sun, Jul 19, 2020 at 5:45 AM Rajini Sivaram 
> > wrote:
> >
> > > Hi Randall,
> > >
> > > Ron found an issue with the quota implementation added under KIP-546,
> > which
> > > is a blocking issue for 2.6.0 since it leaks SCRAM credentials in quota
> > > responses. A fix has been merged into 2.6 branch in the commit
> > >
> > >
> >
> https://github.com/apache/kafka/commit/dd71437de7675d92ad3e4ed01ac3ee11bf5da99d
> > > .
> > > We
> > > have also merged the fix for
> > > https://issues.apache.org/jira/browse/KAFKA-10223 into 2.6 branch
> since
> > it
> > > causes issues for non-Java clients during reassignments.
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > >
> > > On Wed, Jul 15, 2020 at 11:41 PM Randall Hauch 
> wrote:
> > >
> > > > Thanks, Levani.
> > > >
> > > > The content of
> > > >
> > > >
> > >
> >
> https://home.apache.org/~rhauch/kafka-2.6.0-rc0/kafka_2.12-2.6.0-site-docs.tgz
> > > > is the correct generated site. Somehow I messed coping that to the
> > > > https://github.com/apache/kafka-site/tree/asf-site/26 directory.
> I've
> > > > corrected the latter so that
> > https://kafka.apache.org/26/documentation/
> > > > now
> > > > exactly matches that documentation in RC0.
> > > >
> > > > Best regards,
> > > >
> > > > Randall
> > > >
> > > > On Wed, Jul 15, 2020 at 1:25 AM Levani Kokhreidze <
> > > levani.co...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi Randall,
> > > > >
> > > > > Not sure if it’s intentional but, documentation for Kafka Streams
> > 2.6.0
> > > > > also contains “Streams API changes in 2.7.0”
> > > > > https://kafka.apache.org/26/documentation/streams/upgrade-guide <
> > > > > https://kafka.apache.org/26/documentation/streams/upgrade-guide>
> > > > >
> > > > > Also, there seems to be some formatting issue in 2.6.0 section.
> > > > >
> > > > > Levani
> > > > >
> > > > >
> > > > > > On Jul 15, 2020, at 1:48 AM, Randall Hauch 
> > wrote:
> > > > > >
> > > > > > Thanks for catching that, Gary. Apologies to all for announcing
> > this
> > > > > before
> > > > > > pushing the docs, but that's fixed and the following links are
> > > working
> > > > > > (along with the others in my email):
> > > > > >
> > > > > > * https://kafka.apache.org/26/documentation.html
> > > > > > * https://kafka.apache.org/26/protocol.html
> > > > > >
> > > > > > Randall
> > > > > >
> > > > > > On Tue, Jul 14, 2020 at 4:30 PM Gary Russell <
> gruss...@vmware.com>
> > > > > wrote:
> > > > > >
> > > > > >> Docs link [1] is broken.
> > > > > >>
> > > > > >> [1] https://kafka.apache.org/26/documentation.html
> > > > > >>
> > > > > >>
> > > > >
> > > > >
> > > >
> > >
> >
>


[VOTE] 2.6.0 RC1

2020-07-20 Thread Randall Hauch
Hello Kafka users, developers and client-developers,

This is the second candidate for release of Apache Kafka 2.6.0. This is a
major release that includes many new features, including:

* TLSv1.3 has been enabled by default for Java 11 or newer.
* Smooth scaling out of Kafka Streams applications
* Kafka Streams support for emit on change
* New metrics for better operational insight
* Kafka Connect can automatically create topics for source connectors
* Improved error reporting options for sink connectors in Kafka Connect
* New Filter and conditional SMTs in Kafka Connect
* The default value for the `client.dns.lookup` configuration is
now `use_all_dns_ips`
* Upgrade Zookeeper to 3.5.8

This release also includes a few other features, 76 improvements, and 165
bug fixes.

Release notes for the 2.6.0 release:
https://home.apache.org/~rhauch/kafka-2.6.0-rc1/RELEASE_NOTES.html

*** Please download, test and vote by Monday, July 20, 9am PT

Kafka's KEYS file containing PGP keys we use to sign the release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
https://home.apache.org/~rhauch/kafka-2.6.0-rc1/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~rhauch/kafka-2.6.0-rc1/javadoc/

* Tag to be voted upon (off 2.6 branch) is the 2.6.0 tag:
https://github.com/apache/kafka/releases/tag/2.6.0-rc1

* Documentation:
https://kafka.apache.org/26/documentation.html

* Protocol:
https://kafka.apache.org/26/protocol.html

* Successful Jenkins builds for the 2.6 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-2.6-jdk8/91/ (one
flaky test)
System tests: (link to follow)

Thanks,
Randall Hauch


Re: [VOTE] 2.6.0 RC0

2020-07-20 Thread Randall Hauch
All, I've pushed a new release candidate (RC1) and announced via the
"[VOTE] 2.6.0 RC1" discussion thread.

Randall

On Mon, Jul 20, 2020 at 9:51 PM Randall Hauch  wrote:

> Thanks, Rajini. I've cut RC1, but after I pushed the commit I found a
> discrepancy in the JavaDoc that mentions 2.7. Please see the "[VOTE] 2.6.0
> RC1" discussion thread for details.
>
> On Mon, Jul 20, 2020 at 4:59 AM Rajini Sivaram 
> wrote:
>
>> Thanks Randall. We didn't create a JIRA for the KIP-546 security fix since
>> KAFKA-7740 that introduced the regression had a fix version of 2.5.0. A
>> fix
>> was applied without a JIRA to follow the security bug fix process. In the
>> end, it turned out that KAFKA-7740 was only in 2.6, so the bug hadn't got
>> into a release anyway.
>>
>> Regards,
>>
>> Rajini
>>
>>
>> On Sun, Jul 19, 2020 at 2:46 PM Randall Hauch  wrote:
>>
>> > Thanks, Rajini. Is there a Jira issue for the fix related to KIP-546? If
>> > so, please make sure the Fix Version(s) include `2.6.0`.
>> >
>> > I'm going to start RC1 later today and hope to get it published by
>> Monday.
>> > In the meantime, if anyone finds anything else in RC0, please raise it
>> here
>> > -- if it's after RC1 is published then we'll just cut another RC with
>> any
>> > fixes.
>> >
>> > We're down to just 5 system test failures [1], and folks are actively
>> > working to address them. At least some are known to be flaky, but we
>> still
>> > want to get them fixed.
>> >
>> > Best regards,
>> >
>> > Randall
>> >
>> > On Sun, Jul 19, 2020 at 5:45 AM Rajini Sivaram > >
>> > wrote:
>> >
>> > > Hi Randall,
>> > >
>> > > Ron found an issue with the quota implementation added under KIP-546,
>> > which
>> > > is a blocking issue for 2.6.0 since it leaks SCRAM credentials in
>> quota
>> > > responses. A fix has been merged into 2.6 branch in the commit
>> > >
>> > >
>> >
>> https://github.com/apache/kafka/commit/dd71437de7675d92ad3e4ed01ac3ee11bf5da99d
>> > > .
>> > > We
>> > > have also merged the fix for
>> > > https://issues.apache.org/jira/browse/KAFKA-10223 into 2.6 branch
>> since
>> > it
>> > > causes issues for non-Java clients during reassignments.
>> > >
>> > > Regards,
>> > >
>> > > Rajini
>> > >
>> > >
>> > > On Wed, Jul 15, 2020 at 11:41 PM Randall Hauch 
>> wrote:
>> > >
>> > > > Thanks, Levani.
>> > > >
>> > > > The content of
>> > > >
>> > > >
>> > >
>> >
>> https://home.apache.org/~rhauch/kafka-2.6.0-rc0/kafka_2.12-2.6.0-site-docs.tgz
>> > > > is the correct generated site. Somehow I messed coping that to the
>> > > > https://github.com/apache/kafka-site/tree/asf-site/26 directory.
>> I've
>> > > > corrected the latter so that
>> > https://kafka.apache.org/26/documentation/
>> > > > now
>> > > > exactly matches that documentation in RC0.
>> > > >
>> > > > Best regards,
>> > > >
>> > > > Randall
>> > > >
>> > > > On Wed, Jul 15, 2020 at 1:25 AM Levani Kokhreidze <
>> > > levani.co...@gmail.com>
>> > > > wrote:
>> > > >
>> > > > > Hi Randall,
>> > > > >
>> > > > > Not sure if it’s intentional but, documentation for Kafka Streams
>> > 2.6.0
>> > > > > also contains “Streams API changes in 2.7.0”
>> > > > > https://kafka.apache.org/26/documentation/streams/upgrade-guide <
>> > > > > https://kafka.apache.org/26/documentation/streams/upgrade-guide>
>> > > > >
>> > > > > Also, there seems to be some formatting issue in 2.6.0 section.
>> > > > >
>> > > > > Levani
>> > > > >
>> > > > >
>> > > > > > On Jul 15, 2020, at 1:48 AM, Randall Hauch 
>> > wrote:
>> > > > > >
>> > > > > > Thanks for catching that, Gary. Apologies to all for announcing
>> > this
>> > > > > before
>> > > > > > pushing the docs, but that's fixed and the following links are
>> > > working
>> > > > > > (along with the others in my email):
>> > > > > >
>> > > > > > * https://kafka.apache.org/26/documentation.html
>> > > > > > * https://kafka.apache.org/26/protocol.html
>> > > > > >
>> > > > > > Randall
>> > > > > >
>> > > > > > On Tue, Jul 14, 2020 at 4:30 PM Gary Russell <
>> gruss...@vmware.com>
>> > > > > wrote:
>> > > > > >
>> > > > > >> Docs link [1] is broken.
>> > > > > >>
>> > > > > >> [1] https://kafka.apache.org/26/documentation.html
>> > > > > >>
>> > > > > >>
>> > > > >
>> > > > >
>> > > >
>> > >
>> >
>>
>


Re: [VOTE] 2.6.0 RC1

2020-07-20 Thread Randall Hauch
When I was checking the documentation for RC1 after the tag was pushed, I
noticed that the fix Rajini mentioned in the RC0 vote thread (
https://github.com/apache/kafka/pull/8979
<https://github.com/apache/kafka/pull/8979/files#diff-369f0debebfcda6709beeaf11612b34bR20-R21>)
and merged to the `2.6` branch includes the following comment about being
deprecated in 2.7:
https://github.com/apache/kafka/pull/8979/files#diff-369f0debebfcda6709beeaf11612b34bR20-R21
.

Rajini, can you please check that the commits merged to the `2.6` branch do not
have the reference to 2.7? Since these are JavaDocs, I'm assuming that we'll
need to cut RC2.

But it'd be good for everyone else to double check this release.

Best regards,

Randall Hauch

On Mon, Jul 20, 2020 at 9:50 PM Randall Hauch  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for release of Apache Kafka 2.6.0. This is a
> major release that includes many new features, including:
>
> * TLSv1.3 has been enabled by default for Java 11 or newer.
> * Smooth scaling out of Kafka Streams applications
> * Kafka Streams support for emit on change
> * New metrics for better operational insight
> * Kafka Connect can automatically create topics for source connectors
> * Improved error reporting options for sink connectors in Kafka Connect
> * New Filter and conditional SMTs in Kafka Connect
> * The default value for the `client.dns.lookup` configuration is
> now `use_all_dns_ips`
> * Upgrade Zookeeper to 3.5.8
>
> This release also includes a few other features, 76 improvements, and 165
> bug fixes.
>
> Release notes for the 2.6.0 release:
> https://home.apache.org/~rhauch/kafka-2.6.0-rc1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Monday, July 20, 9am PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~rhauch/kafka-2.6.0-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~rhauch/kafka-2.6.0-rc1/javadoc/
>
> * Tag to be voted upon (off 2.6 branch) is the 2.6.0 tag:
> https://github.com/apache/kafka/releases/tag/2.6.0-rc1
>
> * Documentation:
> https://kafka.apache.org/26/documentation.html
>
> * Protocol:
> https://kafka.apache.org/26/protocol.html
>
> * Successful Jenkins builds for the 2.6 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.6-jdk8/91/ (one
> flaky test)
> System tests: (link to follow)
>
> Thanks,
> Randall Hauch
>


[jira] [Resolved] (KAFKA-10295) ConnectDistributedTest.test_bounce should wait for graceful stop

2020-07-20 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10295.
---
  Reviewer: Randall Hauch
Resolution: Fixed

> ConnectDistributedTest.test_bounce should wait for graceful stop
> 
>
> Key: KAFKA-10295
> URL: https://issues.apache.org/jira/browse/KAFKA-10295
> Project: Kafka
>  Issue Type: Test
>  Components: KafkaConnect
>Affects Versions: 2.3.1, 2.5.0, 2.4.1, 2.6.0
>Reporter: Greg Harris
>Assignee: Greg Harris
>Priority: Minor
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.5.1, 2.7.0
>
>
> In ConnectDistributedTest.test_bounce, there are flaky failures that appear 
> to follow this pattern:
>  # The test is parameterized for hard bounces, and with Incremental 
> Cooperative Rebalancing enabled (does not appear for protocol=eager)
>  # A source task is on a worker that will experience a hard bounce
>  # The source task has written records which it has not yet committed in 
> source offsets
>  # The worker is hard-bounced, and the source task is lost
>  # Incremental Cooperative Rebalancing starts its 
> scheduled.rebalance.max.delay.ms delay before recovering the task
>  # The test ends, connectors and Connect are stopped
>  # The test verifies that the sink connector has only written records that 
> have been committed by the source connector
>  # This verification fails because the source offsets are stale: there are 
> uncommitted records in the topic, and the sink connector has written at least 
> one of them.
> This can be addressed by ensuring that the test waits for the rebalance delay 
> to expire, and for the lost task to recover and commit offsets past the 
> progress it made before the bounce.
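> A minimal, illustrative-only sketch of the extra wait this implies (not the actual patch): 
> ducktape's {{wait_until}} is real API, while {{source_committed_offset()}}, 
> {{pre_bounce_offset}}, and the delay value are assumptions made up for illustration.
> {noformat}
> from ducktape.utils.util import wait_until
> 
> REBALANCE_DELAY_SEC = 60  # assumed scheduled.rebalance.max.delay.ms, in seconds
> 
> def wait_for_recovery_past_pre_bounce_progress(pre_bounce_offset, source_committed_offset):
>     # Wait out the rebalance delay and for the recovered source task to commit
>     # source offsets past the progress it had made before the hard bounce.
>     wait_until(lambda: source_committed_offset() > pre_bounce_offset,
>                timeout_sec=REBALANCE_DELAY_SEC + 60,
>                err_msg="Source task did not recover and commit offsets past its pre-bounce progress")
> {noformat}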



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10286) Connect system tests should wait for workers to join group

2020-07-20 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10286.
---
  Reviewer: Randall Hauch
Resolution: Fixed

> Connect system tests should wait for workers to join group
> --
>
> Key: KAFKA-10286
> URL: https://issues.apache.org/jira/browse/KAFKA-10286
> Project: Kafka
>  Issue Type: Test
>  Components: KafkaConnect
>Affects Versions: 2.6.0
>Reporter: Greg Harris
>Assignee: Greg Harris
>Priority: Minor
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.5.1, 2.7.0
>
>
> There are a few flaky test failures for {{connect_distributed_test}} in 
> which one of the workers does not join the group quickly, and the test fails 
> in the following manner:
>  # The test starts each of the connect workers, and waits for their REST APIs 
> to become available
>  # All workers start up, complete plugin scanning, and start their REST API
>  # At least one worker kicks off an asynchronous job to join the group that 
> hangs for an as-yet-unknown reason (30s timeout)
>  # The test continues without all of the members joined
>  # The test makes a call to the REST API that it expects to succeed, and gets 
> an error
>  # The test fails without the worker ever joining the group
> Instead of allowing the test to fail in this manner, we could wait for each 
> worker to join the group with the existing 60s startup timeout. This change 
> would go into effect for all system tests using the 
> {{ConnectDistributedService}}, currently just {{connect_distributed_test}} 
> and {{connect_rest_test}}. 
> Alternatively we could retry the operation that failed, or ensure that we use 
> a known-good worker to continue the test, but these would require more 
> involved code changes. The existing wait-for-startup logic is the most 
> natural place to fix this issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] 2.6.0 RC0

2020-07-19 Thread Randall Hauch
Thanks, Rajini. Is there a Jira issue for the fix related to KIP-546? If
so, please make sure the Fix Version(s) include `2.6.0`.

I'm going to start RC1 later today and hope to get it published by Monday.
In the meantime, if anyone finds anything else in RC0, please raise it here
-- if it's after RC1 is published then we'll just cut another RC with any
fixes.

We're down to just 5 system test failures [1], and folks are actively
working to address them. At least some are known to be flaky, but we still
want to get them fixed.

Best regards,

Randall

On Sun, Jul 19, 2020 at 5:45 AM Rajini Sivaram 
wrote:

> Hi Randall,
>
> Ron found an issue with the quota implementation added under KIP-546, which
> is a blocking issue for 2.6.0 since it leaks SCRAM credentials in quota
> responses. A fix has been merged into 2.6 branch in the commit
>
> https://github.com/apache/kafka/commit/dd71437de7675d92ad3e4ed01ac3ee11bf5da99d
> .
> We
> have also merged the fix for
> https://issues.apache.org/jira/browse/KAFKA-10223 into 2.6 branch since it
> causes issues for non-Java clients during reassignments.
>
> Regards,
>
> Rajini
>
>
> On Wed, Jul 15, 2020 at 11:41 PM Randall Hauch  wrote:
>
> > Thanks, Levani.
> >
> > The content of
> >
> >
> https://home.apache.org/~rhauch/kafka-2.6.0-rc0/kafka_2.12-2.6.0-site-docs.tgz
> > is the correct generated site. Somehow I messed coping that to the
> > https://github.com/apache/kafka-site/tree/asf-site/26 directory. I've
> > corrected the latter so that https://kafka.apache.org/26/documentation/
> > now
> > exactly matches that documentation in RC0.
> >
> > Best regards,
> >
> > Randall
> >
> > On Wed, Jul 15, 2020 at 1:25 AM Levani Kokhreidze <
> levani.co...@gmail.com>
> > wrote:
> >
> > > Hi Randall,
> > >
> > > Not sure if it’s intentional but, documentation for Kafka Streams 2.6.0
> > > also contains “Streams API changes in 2.7.0”
> > > https://kafka.apache.org/26/documentation/streams/upgrade-guide <
> > > https://kafka.apache.org/26/documentation/streams/upgrade-guide>
> > >
> > > Also, there seems to be some formatting issue in 2.6.0 section.
> > >
> > > Levani
> > >
> > >
> > > > On Jul 15, 2020, at 1:48 AM, Randall Hauch  wrote:
> > > >
> > > > Thanks for catching that, Gary. Apologies to all for announcing this
> > > before
> > > > pushing the docs, but that's fixed and the following links are
> working
> > > > (along with the others in my email):
> > > >
> > > > * https://kafka.apache.org/26/documentation.html
> > > > * https://kafka.apache.org/26/protocol.html
> > > >
> > > > Randall
> > > >
> > > > On Tue, Jul 14, 2020 at 4:30 PM Gary Russell 
> > > wrote:
> > > >
> > > >> Docs link [1] is broken.
> > > >>
> > > >> [1] https://kafka.apache.org/26/documentation.html
> > > >>
> > > >>
> > >
> > >
> >
>


Re: [VOTE] 2.6.0 RC0

2020-07-15 Thread Randall Hauch
Thanks, Levani.

The content of
https://home.apache.org/~rhauch/kafka-2.6.0-rc0/kafka_2.12-2.6.0-site-docs.tgz
is the correct generated site. Somehow I messed up copying that to the
https://github.com/apache/kafka-site/tree/asf-site/26 directory. I've
corrected the latter so that https://kafka.apache.org/26/documentation/ now
exactly matches the documentation in RC0.

Best regards,

Randall

On Wed, Jul 15, 2020 at 1:25 AM Levani Kokhreidze 
wrote:

> Hi Randall,
>
> Not sure if it’s intentional but, documentation for Kafka Streams 2.6.0
> also contains “Streams API changes in 2.7.0”
> https://kafka.apache.org/26/documentation/streams/upgrade-guide <
> https://kafka.apache.org/26/documentation/streams/upgrade-guide>
>
> Also, there seems to be some formatting issue in 2.6.0 section.
>
> Levani
>
>
> > On Jul 15, 2020, at 1:48 AM, Randall Hauch  wrote:
> >
> > Thanks for catching that, Gary. Apologies to all for announcing this
> before
> > pushing the docs, but that's fixed and the following links are working
> > (along with the others in my email):
> >
> > * https://kafka.apache.org/26/documentation.html
> > * https://kafka.apache.org/26/protocol.html
> >
> > Randall
> >
> > On Tue, Jul 14, 2020 at 4:30 PM Gary Russell 
> wrote:
> >
> >> Docs link [1] is broken.
> >>
> >> [1] https://kafka.apache.org/26/documentation.html
> >>
> >>
>
>


Re: [VOTE] 2.6.0 RC0

2020-07-14 Thread Randall Hauch
Thanks for catching that, Gary. Apologies to all for announcing this before
pushing the docs, but that's fixed and the following links are working
(along with the others in my email):

* https://kafka.apache.org/26/documentation.html
* https://kafka.apache.org/26/protocol.html

Randall

On Tue, Jul 14, 2020 at 4:30 PM Gary Russell  wrote:

> Docs link [1] is broken.
>
> [1] https://kafka.apache.org/26/documentation.html
>
>


Re: [DISCUSS] Apache Kafka 2.6.0 release

2020-07-14 Thread Randall Hauch
I've just announced 2.6.0 RC0 in a vote thread to this list. If you find
any issues, please reply to that "[VOTE] 2.6.0 RC0" thread.

Thanks, and best regards!

Randall

On Fri, Jul 10, 2020 at 7:44 PM Matthias J. Sax  wrote:

> Randall,
>
> we found another blocker:
> https://issues.apache.org/jira/browse/KAFKA-10262
>
> Luckily, we have already a PR for it.
>
>
> -Matthias
>
>
> On 7/8/20 3:05 PM, Sophie Blee-Goldman wrote:
> > Hey Randall,
> >
> > We just discovered another regression in 2.6:
> > https://issues.apache.org/jira/browse/KAFKA-10249
> >
> > The fix is extremely straightforward -- only about two lines of actual
> > code -- and low risk. It is a new regression introduced in 2.6 and
> affects
> > all Streams apps with any suppression or other in-memory state.
> >
> > The PR is already ready here: https://github.com/apache/kafka/pull/8996
> >
> > Best,
> > Sophie
> >
> > On Wed, Jul 8, 2020 at 10:59 AM John Roesler 
> wrote:
> >
> >> Hi Randall,
> >>
> >> While developing system tests, I've just unearthed a new 2.6 regression:
> >> https://issues.apache.org/jira/browse/KAFKA-10247
> >>
> >> I've got a PR in progress. Hoping to finish it up today:
> >> https://github.com/apache/kafka/pull/8994
> >>
> >> Sorry for the trouble,
> >> -John
> >>
> >> On Mon, Jun 29, 2020, at 09:29, Randall Hauch wrote:
> >>> Thanks for raising this, David. I agree it makes sense to include this
> >> fix
> >>> in 2.6, so I've adjusted the "Fix Version(s)" field to include '2.6.0'.
> >>>
> >>> Best regards,
> >>>
> >>> Randall
> >>>
> >>> On Mon, Jun 29, 2020 at 8:25 AM David Jacot 
> wrote:
> >>>
> >>>> Hi Randall,
> >>>>
> >>>> We have discovered an annoying issue that we introduced in 2.5:
> >>>>
> >>>> Describing topics with the command line tool fails if the user does
> not
> >>>> have the
> >>>> privileges to access the ListPartitionReassignments API. I believe
> that
> >>>> this is the
> >>>> case for most non-admin users.
> >>>>
> >>>> I propose to include the fix in 2.6. The fix is trivial so low risk.
> >> What
> >>>> do you think?
> >>>>
> >>>> JIRA: https://issues.apache.org/jira/browse/KAFKA-10212
> >>>> PR: https://github.com/apache/kafka/pull/8947
> >>>>
> >>>> Best,
> >>>> David
> >>>>
> >>>> On Sat, Jun 27, 2020 at 4:39 AM John Roesler 
> >> wrote:
> >>>>
> >>>>> Hi Randall,
> >>>>>
> >>>>> I neglected to notify this thread when I merged the fix for
> >>>>> https://issues.apache.org/jira/browse/KAFKA-10185
> >>>>> on June 19th. I'm sorry about that oversight. It is marked with
> >>>>> a fix version of 2.6.0.
> >>>>>
> >>>>> On a side note, I have a fix for KAFKA-10173, which I'm merging
> >>>>> and backporting right now.
> >>>>>
> >>>>> Thanks for managing the release,
> >>>>> -John
> >>>>>
> >>>>> On Thu, Jun 25, 2020, at 10:23, Randall Hauch wrote:
> >>>>>> Thanks for the update, folks!
> >>>>>>
> >>>>>> Based upon Jira [1], we currently have 4 issues that are considered
> >>>>>> blockers for the 2.6.0 release and production of RCs:
> >>>>>>
> >>>>>>- https://issues.apache.org/jira/browse/KAFKA-10134 - High CPU
> >>>> issue
> >>>>>>during rebalance in Kafka consumer after upgrading to 2.5
> >>>> (unassigned)
> >>>>>>- https://issues.apache.org/jira/browse/KAFKA-10143 - Can no
> >> longer
> >>>>>>change replication throttle with reassignment tool (Jason G)
> >>>>>>- https://issues.apache.org/jira/browse/KAFKA-10166 - Excessive
> >>>>>>TaskCorruptedException seen in testing (Sophie, Bruno)
> >>>>>>- https://issues.apache.org/jira/browse/KAFKA-10173
> >>>>>>- BufferUnderflowException during Kafka Streams Upgrade (John R)
> >>>>>>

[VOTE] 2.6.0 RC0

2020-07-14 Thread Randall Hauch
Hello Kafka users, developers and client-developers,

This is the first candidate for release of Apache Kafka 2.6.0. This is a
major release that includes many new features, including:

* TLSv1.3 has been enabled by default for Java 11 or newer.
* Smooth scaling out of Kafka Streams applications
* Kafka Streams support for emit on change
* New metrics for better operational insight
* Kafka Connect can automatically create topics for source connectors
* Improved error reporting options for sink connectors in Kafka Connect
* New Filter and conditional SMTs in Kafka Connect
* The default value for the `client.dns.lookup` configuration is
now `use_all_dns_ips`
* Upgrade Zookeeper to 3.5.8

This release also includes a few other features, 76 improvements, and 165
bug fixes.

Release notes for the 2.6.0 release:
https://home.apache.org/~rhauch/kafka-2.6.0-rc0/RELEASE_NOTES.html

*** Please download, test and vote by Monday, July 20, 9am PT

Kafka's KEYS file containing PGP keys we use to sign the release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
https://home.apache.org/~rhauch/kafka-2.6.0-rc0/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~rhauch/kafka-2.6.0-rc0/javadoc/

* Tag to be voted upon (off 2.6 branch) is the 2.6.0 tag:
https://github.com/apache/kafka/releases/tag/2.6.0-rc0

* Documentation:
https://kafka.apache.org/26/documentation.html

* Protocol:
https://kafka.apache.org/26/protocol.html

* Successful Jenkins builds for the 2.6 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-2.6-jdk8/80/
System tests: (link to follow)

Thanks,
Randall Hauch


[jira] [Reopened] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient

2020-07-13 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch reopened KAFKA-5722:
--

> Refactor ConfigCommand to use the AdminClient
> -
>
> Key: KAFKA-5722
> URL: https://issues.apache.org/jira/browse/KAFKA-5722
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Viktor Somogyi-Vass
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>  Labels: kip, needs-kip
> Fix For: 2.6.0
>
>
> The ConfigCommand currently uses a direct connection to zookeeper. The 
> zookeeper dependency should be deprecated and an AdminClient API created to 
> be used instead.
> This change requires a KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient

2020-07-13 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-5722.
--
Resolution: Duplicate

> Refactor ConfigCommand to use the AdminClient
> -
>
> Key: KAFKA-5722
> URL: https://issues.apache.org/jira/browse/KAFKA-5722
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Viktor Somogyi-Vass
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>  Labels: kip, needs-kip
> Fix For: 2.6.0
>
>
> The ConfigCommand currently uses a direct connection to zookeeper. The 
> zookeeper dependency should be deprecated and an AdminClient API created to 
> be used instead.
> This change requires a KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient

2020-07-13 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-5722.
--
Resolution: Fixed

> Refactor ConfigCommand to use the AdminClient
> -
>
> Key: KAFKA-5722
> URL: https://issues.apache.org/jira/browse/KAFKA-5722
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Viktor Somogyi-Vass
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>  Labels: kip, needs-kip
> Fix For: 2.6.0
>
>
> The ConfigCommand currently uses a direct connection to zookeeper. The 
> zookeeper dependency should be deprecated and an AdminClient API created to 
> be used instead.
> This change requires a KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient

2020-07-13 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch reopened KAFKA-5722:
--

> Refactor ConfigCommand to use the AdminClient
> -
>
> Key: KAFKA-5722
> URL: https://issues.apache.org/jira/browse/KAFKA-5722
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Viktor Somogyi-Vass
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>  Labels: kip, needs-kip
> Fix For: 2.6.0
>
>
> The ConfigCommand currently uses a direct connection to zookeeper. The 
> zookeeper dependency should be deprecated and an AdminClient API created to 
> be used instead.
> This change requires a KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9018) Kafka Connect - throw clearer exceptions on serialisation errors

2020-07-01 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9018.
--
Resolution: Fixed

> Kafka Connect - throw clearer exceptions on serialisation errors
> 
>
> Key: KAFKA-9018
> URL: https://issues.apache.org/jira/browse/KAFKA-9018
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.5.0, 2.4.1
>Reporter: Robin Moffatt
>Assignee: Mario Molina
>Priority: Minor
> Fix For: 2.7.0
>
>
> When Connect fails on a deserialisation error, it doesn't show whether it was the 
> *key or the value* that caused the error, nor does it give the user any 
> indication of the *topic/partition/offset* of the message. Kafka Connect 
> should be improved to return this information.
> Example message that user will get (in this case caused by reading non-Avro 
> data with the Avro converter)
> {code:java}
> Caused by: org.apache.kafka.connect.errors.DataException: Failed to 
> deserialize data for topic sample_topic to Avro:
>  at 
> io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:110)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:487)
>  at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
>  at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
>  ... 13 more
>  Caused by: org.apache.kafka.common.errors.SerializationException: Error 
> deserializing Avro message for id -1
>  Caused by: org.apache.kafka.common.errors.SerializationException: Unknown 
> magic byte!{code}
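For context, a minimal sketch of the kind of wrapping this improvement asks for is shown below: catch the converter failure and rethrow it with the key-vs-value distinction and the record's topic, partition, and offset. This is illustrative only; the class and method names are assumptions, not the actual patch.

{code:java}
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.errors.DataException;
import org.apache.kafka.connect.storage.Converter;

public class ConversionContextSketch {
    // Wrap the value conversion so failures identify which part of the record failed
    // and exactly where it came from. An equivalent helper would exist for the key.
    public static SchemaAndValue convertValue(Converter valueConverter,
                                              ConsumerRecord<byte[], byte[]> record) {
        try {
            return valueConverter.toConnectData(record.topic(), record.value());
        } catch (Exception e) {
            throw new DataException(
                    "Error converting message value in topic '" + record.topic()
                            + "' partition " + record.partition()
                            + " at offset " + record.offset(), e);
        }
    }
}
{code}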



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10153) Error Reporting in Connect Documentation

2020-07-01 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10153.
---
  Reviewer: Randall Hauch
Resolution: Fixed

> Error Reporting in Connect Documentation
> 
>
> Key: KAFKA-10153
> URL: https://issues.apache.org/jira/browse/KAFKA-10153
> Project: Kafka
>  Issue Type: Task
>  Components: documentation, KafkaConnect
>Affects Versions: 2.6.0
>Reporter: Aakash Shah
>Assignee: Aakash Shah
>Priority: Major
> Fix For: 2.6.0, 2.7.0
>
>
> Add documentation for error reporting in Kafka Connect.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 2.6.0 release

2020-06-29 Thread Randall Hauch
Thanks for raising this, David. I agree it makes sense to include this fix
in 2.6, so I've adjusted the "Fix Version(s)" field to include '2.6.0'.

Best regards,

Randall

On Mon, Jun 29, 2020 at 8:25 AM David Jacot  wrote:

> Hi Randall,
>
> We have discovered an annoying issue that we introduced in 2.5:
>
> Describing topics with the command line tool fails if the user does not
> have the
> privileges to access the ListPartitionReassignments API. I believe that
> this is the
> case for most non-admin users.
>
> I propose to include the fix in 2.6. The fix is trivial so low risk. What
> do you think?
>
> JIRA: https://issues.apache.org/jira/browse/KAFKA-10212
> PR: https://github.com/apache/kafka/pull/8947
>
> Best,
> David
>
> On Sat, Jun 27, 2020 at 4:39 AM John Roesler  wrote:
>
> > Hi Randall,
> >
> > I neglected to notify this thread when I merged the fix for
> > https://issues.apache.org/jira/browse/KAFKA-10185
> > on June 19th. I'm sorry about that oversight. It is marked with
> > a fix version of 2.6.0.
> >
> > On a side node, I have a fix for KAFKA-10173, which I'm merging
> > and backporting right now.
> >
> > Thanks for managing the release,
> > -John
> >
> > On Thu, Jun 25, 2020, at 10:23, Randall Hauch wrote:
> > > Thanks for the update, folks!
> > >
> > > Based upon Jira [1], we currently have 4 issues that are considered
> > > blockers for the 2.6.0 release and production of RCs:
> > >
> > >- https://issues.apache.org/jira/browse/KAFKA-10134 - High CPU
> issue
> > >during rebalance in Kafka consumer after upgrading to 2.5
> (unassigned)
> > >- https://issues.apache.org/jira/browse/KAFKA-10143 - Can no longer
> > >change replication throttle with reassignment tool (Jason G)
> > >- https://issues.apache.org/jira/browse/KAFKA-10166 - Excessive
> > >TaskCorruptedException seen in testing (Sophie, Bruno)
> > >- https://issues.apache.org/jira/browse/KAFKA-10173
> > >- BufferUnderflowException during Kafka Streams Upgrade (John R)
> > >
> > > and one critical issue that may be a regression that at this time will
> > not
> > > block production of RCs:
> > >
> > >- https://issues.apache.org/jira/browse/KAFKA-10017 - Flaky Test
> > >EosBetaUpgradeIntegrationTest.shouldUpgradeFromEosAlphaToEosBeta
> > (Matthias)
> > >
> > > and one build/release issue we'd like to fix if possible but will not
> > block
> > > RCs or the release:
> > >
> > >- https://issues.apache.org/jira/browse/KAFKA-9381
> > >- kafka-streams-scala: Javadocs + Scaladocs not published on maven
> > central
> > >(me)
> > >
> > > I'm working with the assignees and reporters of these issues (via
> > comments
> > > on the issues) to identify an ETA and to track progress. Anyone is
> > welcome
> > > to chime in on those issues.
> > >
> > > At this time, no other changes (other than PRs that only fix/improve
> > tests)
> > > should be merged to the `2.6` branch. If you think you've identified a
> > new
> > > blocker issue or believe another existing issue should be treated as a
> > > blocker for 2.6.0, please mark the issue's `fix version` as `2.6.0`
> _and_
> > > respond to this thread with details, and I will work with you to
> > determine
> > > whether it is indeed a blocker.
> > >
> > > As always, let me know here if you have any questions/concerns.
> > >
> > > Best regards,
> > >
> > > Randall
> > >
> > > [1] https://issues.apache.org/jira/projects/KAFKA/versions/12346918
> > >
> > > On Thu, Jun 25, 2020 at 8:27 AM Mario Molina 
> wrote:
> > >
> > > > Hi Randal,
> > > >
> > > > Ticket https://issues.apache.org/jira/browse/KAFKA-9018 is not a
> > blocker
> > > > so
> > > > it can be moved to the 2.7.0 version.
> > > >
> > > > Mario
> > > >
> > > > On Wed, 24 Jun 2020 at 20:22, Boyang Chen <
> reluctanthero...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hey Randal,
> > > > >
> > > > > There was another spotted blocker:
> > > > > https://issues.apache.org/jira/browse/KAFKA-10173
> > > > > As of current, John is working on a fix.
> > > > >
> >

Re: [DISCUSS] Apache Kafka 2.6.0 release

2020-06-25 Thread Randall Hauch
Thanks for the update, folks!

Based upon Jira [1], we currently have 4 issues that are considered
blockers for the 2.6.0 release and production of RCs:

   - https://issues.apache.org/jira/browse/KAFKA-10134 - High CPU issue
   during rebalance in Kafka consumer after upgrading to 2.5 (unassigned)
   - https://issues.apache.org/jira/browse/KAFKA-10143 - Can no longer
   change replication throttle with reassignment tool (Jason G)
   - https://issues.apache.org/jira/browse/KAFKA-10166 - Excessive
   TaskCorruptedException seen in testing (Sophie, Bruno)
   - https://issues.apache.org/jira/browse/KAFKA-10173
   - BufferUnderflowException during Kafka Streams Upgrade (John R)

and one critical issue that may be a regression that at this time will not
block production of RCs:

   - https://issues.apache.org/jira/browse/KAFKA-10017 - Flaky Test
   EosBetaUpgradeIntegrationTest.shouldUpgradeFromEosAlphaToEosBeta (Matthias)

and one build/release issue we'd like to fix if possible but will not block
RCs or the release:

   - https://issues.apache.org/jira/browse/KAFKA-9381
   - kafka-streams-scala: Javadocs + Scaladocs not published on maven central
   (me)

I'm working with the assignees and reporters of these issues (via comments
on the issues) to identify an ETA and to track progress. Anyone is welcome
to chime in on those issues.

At this time, no other changes (other than PRs that only fix/improve tests)
should be merged to the `2.6` branch. If you think you've identified a new
blocker issue or believe another existing issue should be treated as a
blocker for 2.6.0, please mark the issue's `fix version` as `2.6.0` _and_
respond to this thread with details, and I will work with you to determine
whether it is indeed a blocker.

As always, let me know here if you have any questions/concerns.

Best regards,

Randall

[1] https://issues.apache.org/jira/projects/KAFKA/versions/12346918

On Thu, Jun 25, 2020 at 8:27 AM Mario Molina  wrote:

> Hi Randal,
>
> Ticket https://issues.apache.org/jira/browse/KAFKA-9018 is not a blocker
> so
> it can be moved to the 2.7.0 version.
>
> Mario
>
> On Wed, 24 Jun 2020 at 20:22, Boyang Chen 
> wrote:
>
> > Hey Randal,
> >
> > There was another spotted blocker:
> > https://issues.apache.org/jira/browse/KAFKA-10173
> > As of current, John is working on a fix.
> >
> > Boyang
> >
> > On Wed, Jun 24, 2020 at 4:08 PM Sophie Blee-Goldman  >
> > wrote:
> >
> > > Hey all,
> > >
> > > Just a heads up that we discovered a new blocker. The fix is pretty
> > > straightforward
> > > and there's already a PR for it so it should be resolved quickly.
> > >
> > > Here's the ticket: https://issues.apache.org/jira/browse/KAFKA-10198
> > >
> > > On Sat, May 30, 2020 at 12:52 PM Randall Hauch 
> wrote:
> > >
> > > > Hi, Kowshik,
> > > >
> > > > Thanks for the update on KIP-584. This is listed on the "Postponed"
> > > section
> > > > of the AK 2.6.0 release plan (
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > > > ).
> > > >
> > > > Best regards,
> > > >
> > > > Randall
> > > >
> > > > On Fri, May 29, 2020 at 4:51 PM Kowshik Prakasam <
> > kpraka...@confluent.io
> > > >
> > > > wrote:
> > > >
> > > > > Hi Randall,
> > > > >
> > > > > We have to remove KIP-584 from the release plan, as this item will
> > not
> > > be
> > > > > completed for 2.6 release (although KIP is accepted). We plan to
> > > include
> > > > it
> > > > > in a next release.
> > > > >
> > > > >
> > > > > Cheers,
> > > > > Kowshik
> > > > >
> > > > >
> > > > > On Fri, May 29, 2020 at 11:43 AM Maulin Vasavada <
> > > > > maulin.vasav...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi Randall Hauch
> > > > > >
> > > > > > Can we add KIP-519 to 2.6? It was merged to Trunk already in
> April
> > -
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=128650952
> > > > > > .
> > > > > >
> > > > > > Thanks
> > > > > > Maulin
> > > > > >
> > > > > > On Fri, May 29, 2020 at 11:01 AM Randall Hauch  wrote:

[jira] [Resolved] (KAFKA-10147) MockAdminClient#describeConfigs(Collection) is unable to handle broker resource

2020-06-17 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10147.
---
Fix Version/s: 2.7.0
 Reviewer: Boyang Chen
   Resolution: Fixed

Merged to `trunk` and backported to `2.6`.

> MockAdminClient#describeConfigs(Collection) is unable to 
> handle broker resource
> ---
>
> Key: KAFKA-10147
> URL: https://issues.apache.org/jira/browse/KAFKA-10147
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Blocker
> Fix For: 2.6.0, 2.7.0
>
>
> MockAdminClient#describeConfigs(Collection) has a new 
> implementation introduced by 
> https://github.com/apache/kafka/commit/48b56e533b3ff22ae0e2cf7fcc649e7df19f2b06.
>  It does not handle the broker resource, so 
> ReassignPartitionsUnitTest#testModifyBrokerThrottles throws an NPE



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-7239) Kafka Connect secret externalization not working

2020-06-16 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-7239.
--
Resolution: Invalid

Closing at the request of the reporter.

> Kafka Connect secret externalization not working
> 
>
> Key: KAFKA-7239
> URL: https://issues.apache.org/jira/browse/KAFKA-7239
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: satyanarayan komandur
>Priority: Major
>
> I used the Kafka FileConfigProvider to externalize properties like 
> connection.user and connection.password for the JDBC source connector. I noticed 
> that the values in the connection properties are replaced only after the 
> connector has attempted to establish a connection with the original key/value 
> pairs (untransformed). This results in a connection failure. I am not 
> sure whether this issue belongs to the Kafka Connect framework or is an issue 
> with the JDBC Source Connector



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9374) Worker can be disabled by blocked connectors

2020-06-11 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9374.
--
  Reviewer: Konstantine Karantasis
Resolution: Fixed

Merged to `trunk` and backported to the `2.6` branch for inclusion in 2.6.0.

> Worker can be disabled by blocked connectors
> 
>
> Key: KAFKA-9374
> URL: https://issues.apache.org/jira/browse/KAFKA-9374
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0, 2.0.1, 2.1.0, 
> 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 2.4.0, 2.3.1
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.6.0
>
>
> If a connector hangs during any of its {{initialize}}, {{start}}, {{stop}}, 
> {{taskConfigs}}, {{taskClass}}, {{version}}, {{config}}, or {{validate}} 
> methods, the worker will be disabled for some types of requests thereafter, 
> including connector creation, connector reconfiguration, and connector 
> deletion.
>  -This only occurs in distributed mode and is due to the threading model used 
> by the 
> [DistributedHerder|https://github.com/apache/kafka/blob/03f763df8a8d9482d8c099806336f00cf2521465/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java]
>  class.- This affects both distributed and standalone mode. Distributed 
> herders perform some connector work synchronously in their {{tick}} thread, 
> which also handles group membership and some REST requests. The majority of 
> the herder methods for the standalone herder are {{synchronized}}, including 
> those for creating, updating, and deleting connectors; as long as one of 
> those methods blocks, all subsequent calls to any of these methods will also 
> be blocked.
>  
> One potential solution could be to treat connectors that fail to start, stop, 
> etc. in time similarly to tasks that fail to stop within the [task graceful 
> shutdown timeout 
> period|https://github.com/apache/kafka/blob/03f763df8a8d9482d8c099806336f00cf2521465/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfig.java#L121-L126]
>  by handling all connector interactions on a separate thread, waiting for 
> them to complete within a timeout, and abandoning the thread (and 
> transitioning the connector to the {{FAILED}} state, if it has been created 
> at all) if that timeout expires.
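As a rough illustration of that timeout-based approach (not the actual implementation), the herder could hand each connector callback to a dedicated executor and give up if it does not return in time. The class name, timeout handling, and the decision to let the caller mark the connector FAILED are assumptions for this sketch.

{code:java}
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class GuardedConnectorInvoker {
    private final ExecutorService executor = Executors.newCachedThreadPool();

    // Run a potentially blocking connector callback (start, stop, validate, ...) on its
    // own thread. If it exceeds the timeout, abandon it so the herder's tick thread is
    // not blocked; the caller would then transition the connector to FAILED.
    public void invokeWithTimeout(Runnable connectorCallback, long timeoutMs)
            throws InterruptedException, TimeoutException {
        Future<?> result = executor.submit(connectorCallback);
        try {
            result.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            result.cancel(true); // the stuck thread may linger, but the herder moves on
            throw e;
        } catch (ExecutionException e) {
            throw new RuntimeException("Connector callback failed", e.getCause());
        }
    }
}
{code}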



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9969) ConnectorClientConfigRequest is loaded in isolation and throws LinkageError

2020-06-11 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9969.
--
  Reviewer: Konstantine Karantasis
Resolution: Fixed

[~kkonstantine] merged to `trunk` and backported to:
* `2.6` for inclusion in upcoming 2.6.0
* `2.5` for inclusion in upcoming 2.5.1
* `2.4` for inclusion in a future 2.4.2
* `2.3` for inclusion in a future 2.3.2

> ConnectorClientConfigRequest is loaded in isolation and throws LinkageError
> ---
>
> Key: KAFKA-9969
> URL: https://issues.apache.org/jira/browse/KAFKA-9969
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.5.0, 2.4.1
>Reporter: Greg Harris
>Assignee: Greg Harris
>Priority: Major
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.5.1
>
>
> ConnectorClientConfigRequest (added by 
> [KIP-458|https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy])
>  is a class in connect-api, and should always be loaded by the system 
> classloader. If a plugin packages the connect-api jar, the REST API may fail 
> with the following stacktrace:
> {noformat}
> java.lang.LinkageError: loader constraint violation: loader (instance of 
> sun/misc/Launcher$AppClassLoader) previously initiated loading for a 
> different type with name 
> "org/apache/kafka/connect/connector/policy/ConnectorClientConfigRequest" at 
> java.lang.ClassLoader.defineClass1(Native Method) at 
> java.lang.ClassLoader.defineClass(ClassLoader.java:763) at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at 
> java.net.URLClassLoader.defineClass(URLClassLoader.java:468) at 
> java.net.URLClassLoader.access$100(URLClassLoader.java:74) at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:369) at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:363) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:362) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:424) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:357) at 
> org.apache.kafka.connect.runtime.AbstractHerder.validateClientOverrides(AbstractHerder.java:416)
>  at 
> org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:342)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder$6.call(DistributedHerder.java:745)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder$6.call(DistributedHerder.java:742)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:342)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:282)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ... 1 more
> {noformat}
> It appears that the other class in org.apache.kafka.connect.connector.policy, 
> ConnectorClientConfigOverridePolicy had a similar issue in KAFKA-8415, and 
> received a fix.
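For illustration only, the fix amounts to making sure framework API classes are always resolved parent-first by the plugin classloader, even when a plugin bundles its own copy of connect-api. The sketch below is a simplified stand-in for Connect's actual plugin isolation logic; the package-prefix check is an assumption, not the real rule set.

{code:java}
import java.net.URL;
import java.net.URLClassLoader;

public class DelegatingPluginClassLoader extends URLClassLoader {
    public DelegatingPluginClassLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        // Framework API classes must come from the parent so there is only one copy.
        if (name.startsWith("org.apache.kafka.connect.")) {
            return super.loadClass(name, resolve);
        }
        synchronized (getClassLoadingLock(name)) {
            Class<?> loaded = findLoadedClass(name);
            if (loaded == null) {
                try {
                    loaded = findClass(name); // child-first for plugin classes
                } catch (ClassNotFoundException e) {
                    loaded = super.loadClass(name, resolve);
                }
            }
            if (resolve) {
                resolveClass(loaded);
            }
            return loaded;
        }
    }
}
{code}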



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9216) Enforce connect internal topic configuration at startup

2020-06-10 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9216.
--
  Reviewer: Randall Hauch
Resolution: Fixed

Merged to `trunk` the second PR, which enforces the `cleanup.policy` setting on 
Connect's three internal topics, and cherry-picked it to the `2.6` branch (for the 
upcoming 2.6.0). However, merging to earlier branches would require too many 
changes in integration tests.

> Enforce connect internal topic configuration at startup
> ---
>
> Key: KAFKA-9216
> URL: https://issues.apache.org/jira/browse/KAFKA-9216
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Evelyn Bayes
>Priority: Major
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.5.1
>
>
> Users sometimes configure Connect's internal topic for configurations with 
> more than one partition. One partition is expected, however, and using more 
> than one leads to weird behavior that is sometimes not easy to spot.
> Here's one example of a log message:
> {noformat}
> "textPayload": "[2019-11-20 11:12:14,049] INFO [Worker clientId=connect-1, 
> groupId=td-connect-server] Current config state offset 284 does not match 
> group assignment 274. Forcing rebalance. 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder:942)\n"
> {noformat}
> Would it be possible to add a check in the KafkaConfigBackingStore and 
> prevent the worker from starting if connect config partition count !=1 ?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9845) plugin.path property does not work with config provider

2020-06-10 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9845.
--
Fix Version/s: 2.7.0
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to `trunk`, and backported to `2.6` (for upcoming 2.6.0), `2.5` (for 
upcoming 2.5.1), and `2.4` (for future 2.4.2).

> plugin.path property does not work with config provider
> ---
>
> Key: KAFKA-9845
> URL: https://issues.apache.org/jira/browse/KAFKA-9845
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0, 2.4.0, 2.3.1, 2.5.0, 2.4.1
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Minor
> Fix For: 2.6.0, 2.4.2, 2.5.1, 2.7.0
>
>
> The config provider mechanism doesn't work if used for the {{plugin.path}} 
> property of a standalone or distributed Connect worker. This is because the 
> {{Plugins}} instance which performs plugin path scanning is created using the 
> raw worker config, pre-transformation (see 
> [ConnectStandalone|https://github.com/apache/kafka/blob/371ad143a6bb973927c89c0788d048a17ebac91a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectStandalone.java#L79]
>  and 
> [ConnectDistributed|https://github.com/apache/kafka/blob/371ad143a6bb973927c89c0788d048a17ebac91a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectDistributed.java#L91]).
> Unfortunately, because config providers are loaded as plugins, there's a 
> circular dependency issue here. The {{Plugins}} instance needs to be created 
> _before_ the {{DistributedConfig}}/{{StandaloneConfig}} is created in order 
> for the config providers to be loaded correctly, and the config providers 
> need to be loaded in order to perform their logic on any properties 
> (including the {{plugin.path}} property).
> There is no clear fix for this issue in the code base, and the only known 
> workaround is to refrain from using config providers for the {{plugin.path}} 
> property.
> A couple of improvements could potentially be made to the UX when this 
> issue arises:
>  #  Alter the config logging performed by the {{DistributedConfig}} and 
> {{StandaloneConfig}} classes to _always_ log the raw value for the 
> {{plugin.path}} property. Right now, the transformed value is logged even 
> though it isn't used, which is likely to cause confusion.
>  # Issue a {{WARN}}- or even {{ERROR}}-level log message when it's detected 
> that the user is attempting to use config providers for the {{plugin.path}} 
> property, which states that config providers cannot be used for that specific 
> property, instructs them to change the value for the property accordingly, 
> and/or informs them of the actual value that the framework will use for that 
> property when performing plugin path scanning.
> We should _not_ throw an error on startup if this condition is detected, as 
> this could cause previously-functioning, benignly-misconfigured Connect 
> workers to fail to start after an upgrade.
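A minimal sketch of improvement #2 above might look like the following. The property name lookup, the log message, and the `${` heuristic for detecting config provider variables are illustrative assumptions rather than the actual change.

{code:java}
import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PluginPathCheck {
    private static final Logger log = LoggerFactory.getLogger(PluginPathCheck.class);

    // Warn when plugin.path appears to use a config provider variable, since the raw
    // (unresolved) value is what plugin scanning will actually use.
    public static void warnIfPluginPathUsesVariables(Map<String, String> rawWorkerProps) {
        String rawPluginPath = rawWorkerProps.get("plugin.path");
        if (rawPluginPath != null && rawPluginPath.contains("${")) {
            log.warn("Variable references are not supported for plugin.path; "
                    + "the raw value '{}' will be used for plugin scanning", rawPluginPath);
        }
    }
}
{code}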



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-6942) Connect connectors api doesn't show versions of connectors

2020-06-10 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-6942.
--
Resolution: Invalid

I'm going to close this as INVALID because the versions are available in the 
API, as noted above.

> Connect connectors api doesn't show versions of connectors
> --
>
> Key: KAFKA-6942
> URL: https://issues.apache.org/jira/browse/KAFKA-6942
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Affects Versions: 1.1.0
>Reporter: Antony Stubbs
>Priority: Minor
>  Labels: needs-kip
>
> Would be very useful to have the connector list API response also return the 
> version of the installed connectors.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10115) Incorporate errors.tolerance with the Errant Record Reporter

2020-06-10 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10115.
---
  Reviewer: Randall Hauch
Resolution: Fixed

Merged to `2.6` rather than `trunk` (accidentally) and cherry-picked to `trunk`.

> Incorporate errors.tolerance with the Errant Record Reporter
> 
>
> Key: KAFKA-10115
> URL: https://issues.apache.org/jira/browse/KAFKA-10115
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.6.0
>Reporter: Aakash Shah
>Assignee: Aakash Shah
>Priority: Major
> Fix For: 2.6.0
>
>
> The errors.tolerance config is currently not honored when using the Errant 
> Record Reporter. If errors.tolerance is none, then the task should fail after 
> the errant record has been sent to the DLQ in Kafka.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9066) Kafka Connect JMX : source & sink task metrics missing for tasks in failed state

2020-06-10 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9066.
--
Fix Version/s: 2.7.0
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to `trunk`, and backported to `2.6` (for upcoming 2.6.0). I'll file a 
separate issue to backport this to `2.5` (since we're in-progress on releasing 
2.5.1) and `2.4`.

> Kafka Connect JMX : source & sink task metrics missing for tasks in failed 
> state
> 
>
> Key: KAFKA-9066
> URL: https://issues.apache.org/jira/browse/KAFKA-9066
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.1.1
>Reporter: Mikołaj Stefaniak
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.6.0, 2.7.0
>
>
> h2. Overview
> Kafka Connect exposes various metrics via JMX. Those metrics can be exported 
> i.e. by _Prometheus JMX Exporter_ for further processing.
> One of crucial attributes is connector's *task status.*
> According to official Kafka docs, status is available as +status+ attribute 
> of following MBean:
> {quote}kafka.connect:type=connector-task-metrics,connector="{connector}",task="{task}" status
>  - The status of the connector task. One of 'unassigned', 'running', 
> 'paused', 'failed', or 'destroyed'.
> {quote}
> h2. Issue
> Generally, +connector-task-metrics+ are exposed properly for tasks in +running+ 
> status but not exposed at all if the task is +failed+.
> Failed Task *appears* properly with failed status when queried via *REST API*:
>  
> {code:java}
> $ curl -X GET -u 'user:pass' 
> http://kafka-connect.mydomain.com/connectors/customerconnector/status
> {"name":"customerconnector","connector":{"state":"RUNNING","worker_id":"kafka-connect.mydomain.com:8080"},"tasks":[{"id":0,"state":"FAILED","worker_id":"kafka-connect.mydomain.com:8080","trace":"org.apache.kafka.connect.errors.ConnectException:
>  Received DML 'DELETE FROM mysql.rds_sysinfo .."}],"type":"source"}
> $ {code}
>  
> Failed Task *doesn't appear* as bean with +connector-task-metrics+ type when 
> queried via *JMX*:
>  
> {code:java}
> $ echo "beans -d kafka.connect" | java -jar 
> target/jmxterm-1.1.0-SNAPSHOT-uber.jar -l localhost:8081 -n -v silent | grep 
> connector=customerconnector
> kafka.connect:connector=customerconnector,task=0,type=task-error-metrics
> kafka.connect:connector=customerconnector,type=connector-metrics
> $
> {code}
> h2. Expected result
> It is expected, that bean with +connector-task-metrics+ type will appear also 
> for tasks that failed.
> Below is example of how beans are properly registered for tasks in Running 
> state:
>  
> {code:java}
> $ echo "get -b 
> kafka.connect:connector=sinkConsentSubscription-1000,task=0,type=connector-task-metrics
>  status" | java -jar target/jmxterm-1.1.0-SNAPSHOT-uber.jar -l 
> localhost:8081 -n -v silent
> status = running;
> $
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10146) Backport KAFKA-9066 to 2.5 and 2.4 branches

2020-06-10 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10146:
-

 Summary: Backport KAFKA-9066 to 2.5 and 2.4 branches
 Key: KAFKA-10146
 URL: https://issues.apache.org/jira/browse/KAFKA-10146
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 2.4.2, 2.5.2


KAFKA-9066 was merged on the same day we were trying to release 2.5.1, so this 
was not backported at the time. However, once 2.5.1 is out the door, the 
`775f0d484` commit on `trunk` should be backported to the `2.5` and `2.4` 
branches.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9468) config.storage.topic partition count issue is hard to debug

2020-06-07 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9468.
--
  Assignee: Randall Hauch
Resolution: Fixed

> config.storage.topic partition count issue is hard to debug
> ---
>
> Key: KAFKA-9468
> URL: https://issues.apache.org/jira/browse/KAFKA-9468
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 1.0.2, 1.1.1, 2.0.1, 2.1.1, 2.2.2, 2.4.0, 2.3.1
>Reporter: Evelyn Bayes
>    Assignee: Randall Hauch
>Priority: Minor
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.5.1
>
>
> When you run connect distributed with 2 or more workers and 
> config.storage.topic has more than 1 partition, you can end up with one of 
> the workers rebalancing endlessly:
> [2020-01-13 12:53:23,535] INFO [Worker clientId=connect-1, 
> groupId=connect-cluster] Current config state offset 37 is behind group 
> assignment 63, reading to end of config log 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
>  [2020-01-13 12:53:23,584] INFO [Worker clientId=connect-1, 
> groupId=connect-cluster] Finished reading to end of log and updated config 
> snapshot, new config log offset: 37 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
>  [2020-01-13 12:53:23,584] INFO [Worker clientId=connect-1, 
> groupId=connect-cluster] Current config state offset 37 does not match group 
> assignment 63. Forcing rebalance. 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
>  
> In case anyone viewing this doesn't know: this topic is only ever meant to 
> be created with one partition.
>  
> *Suggested Solution*
> Make the Connect worker check the partition count when it starts, and if the 
> partition count is > 1, have Kafka Connect stop and log the reason why.
> I think this is reasonable as it would stop users just starting out from 
> building it incorrectly and would be easy to fix early. For those upgrading, 
> this would easily be caught in a PRE-PROD environment. And even if they 
> upgraded directly in PROD, they would only be impacted if they upgraded all 
> Connect workers at the same time.
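A minimal sketch of that startup check, using the AdminClient, might look like the following. The method and the choice of exception are illustrative, not the actual Connect code.

{code:java}
import java.util.Collections;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.config.ConfigException;

public class ConfigTopicCheck {
    // Fail fast if the config storage topic was created with more than one partition.
    public static void requireSinglePartition(Admin admin, String configTopic) throws Exception {
        TopicDescription description = admin.describeTopics(Collections.singleton(configTopic))
                .all().get().get(configTopic);
        int partitions = description.partitions().size();
        if (partitions != 1) {
            throw new ConfigException("Topic '" + configTopic + "' supplied via config.storage.topic "
                    + "has " + partitions + " partitions, but it must have exactly one partition");
        }
    }
}
{code}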



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-9216) Enforce connect internal topic configuration at startup

2020-06-07 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch reopened KAFKA-9216:
--

The previous PR only checked the number of partitions, so I'm going to reopen 
this to add another PR that checks the internal topic cleanup policy, which 
should be `compact` (only), and should not be `delete,compact` or `delete`. 
Using any other topic cleanup policy for the internal topics can lead to lost 
configurations, source offsets, or statuses.
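A rough sketch of such a cleanup-policy check (illustrative only, not the merged PR) could use the AdminClient's describeConfigs:

{code:java}
import java.util.Collections;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigException;
import org.apache.kafka.common.config.ConfigResource;
import org.apache.kafka.common.config.TopicConfig;

public class CleanupPolicyCheck {
    // Require cleanup.policy=compact (and nothing else) on an internal Connect topic.
    public static void requireCompactOnly(Admin admin, String topic) throws Exception {
        ConfigResource resource = new ConfigResource(ConfigResource.Type.TOPIC, topic);
        Config topicConfig = admin.describeConfigs(Collections.singleton(resource))
                .all().get().get(resource);
        String policy = topicConfig.get(TopicConfig.CLEANUP_POLICY_CONFIG).value();
        if (!TopicConfig.CLEANUP_POLICY_COMPACT.equals(policy)) {
            throw new ConfigException("Topic '" + topic + "' must use cleanup.policy="
                    + TopicConfig.CLEANUP_POLICY_COMPACT + ", but found: " + policy);
        }
    }
}
{code}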

> Enforce connect internal topic configuration at startup
> ---
>
> Key: KAFKA-9216
> URL: https://issues.apache.org/jira/browse/KAFKA-9216
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Evelyn Bayes
>Priority: Major
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.5.1
>
>
> Users sometimes configure Connect's internal topic for configurations with 
> more than one partition. One partition is expected, however, and using more 
> than one leads to weird behavior that is sometimes not easy to spot.
> Here's one example of a log message:
> {noformat}
> "textPayload": "[2019-11-20 11:12:14,049] INFO [Worker clientId=connect-1, 
> groupId=td-connect-server] Current config state offset 284 does not match 
> group assignment 274. Forcing rebalance. 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder:942)\n"
> {noformat}
> Would it be possible to add a check in the KafkaConfigBackingStore and 
> prevent the worker from starting if connect config partition count !=1 ?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9570) SSL cannot be configured for Connect in standalone mode

2020-06-05 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9570.
--
Fix Version/s: 2.5.1
   2.4.2
   2.6.0
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to `trunk` and backported to the `2.6`, `2.5` and `2.4` branches.

> SSL cannot be configured for Connect in standalone mode
> ---
>
> Key: KAFKA-9570
> URL: https://issues.apache.org/jira/browse/KAFKA-9570
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 2.0.2, 2.3.0, 2.1.2, 
> 2.2.1, 2.2.2, 2.4.0, 2.3.1, 2.2.3, 2.5.0, 2.3.2, 2.4.1
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.6.0, 2.4.2, 2.5.1
>
>
> When Connect is brought up in standalone, if the worker config contains _any_ 
> properties that begin with the {{listeners.https.}} prefix, SSL will not be 
> enabled on the worker.
> This is because the relevant SSL configs are only defined in the [distributed 
> worker 
> config|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedConfig.java#L260]
>  instead of the [superclass worker 
> config|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfig.java].
>  This, in conjunction with [a call 
> to|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/util/SSLUtils.java#L42]
>  
> [AbstractConfig::valuesWithPrefixAllOrNothing|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java],
>  causes all configs not defined in the {{WorkerConfig}} used by the worker to 
> be silently dropped when the worker configures its REST server if there is at 
> least one config present with the {{listeners.https.}} prefix.
> Unfortunately, the workaround of specifying all SSL configs without the 
> {{listeners.https.}} prefix will also fail if any passwords need to be 
> specified. This is because the password values in the {{Map}} returned from 
> {{AbstractConfig::valuesWithPrefixAllOrNothing}} aren't parsed as passwords, 
> but the [framework expects them to 
> be|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/util/SSLUtils.java#L87].
>  However, if no keystore, truststore, or key passwords need to be configured, 
> then it should be possible to work around the issue by specifying all of 
> those configurations without a prefix (as long as they don't conflict with 
> any other configs in that namespace).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-158 UPDATED: Enable source connectors to create new topics with specific configs in Kafka Connect during runtime

2020-06-05 Thread Randall Hauch
but
> > I
> > > > > don't think we have a strong case for making this functionality
> > > pluggable
> > > > > at the moment. Topics are not very transient entities in Kafka. And
> > > this
> > > > > feature is focusing specifically on topic creation and does not
> > suggest
> > > > > altering configuration of existing topics, including topics that
> may
> > be
> > > > > created once by a connector that will use this new functionality.
> > > > > Therefore, adapting to changes to the attainable replication factor
> > > > during
> > > > > runtime, without expressing this in the configuration of a
> connector
> > > > seems
> > > > > to involve more risks than benefits. Overall, a generic topic
> > creation
> > > > hook
> > > > > shares similarities to exposing an admin client to the connector
> > itself
> > > > and
> > > > > based on previous discussions, seems that this approach will result
> > in
> > > > > considerable extensions in both configuration and implementation
> > > without
> > > > it
> > > > > being fully justified at the moment.
> > > > >
> > > > > I suggest moving forward without pluggable classes for now, and if
> in
> > > the
> > > > > future we wish to return to this topic for second iteration, then
> > > > factoring
> > > > > out the proposed functionality under the configuration of a module
> > that
> > > > > applies topic creation based on regular expressions should be easy
> to
> > > do
> > > > in
> > > > > a compatible way.
> > > > >
> > > > > Best,
> > > > > Konstantine
> > > > >
> > > > >
> > > > > On Thu, Dec 12, 2019 at 1:37 PM Ryanne Dolan <
> ryannedo...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Konstantine, thanks for the updates. I wonder if we should take
> > your
> > > > > > proposal one step further and make this pluggable. Your
> > > include/exclude
> > > > > > regexes are great out-of-the-box features, but it may be valuable
> > to
> > > > > > plug-in more sophisticated logic to handle topic creation.
> > > > > >
> > > > > > Instead of enabling/disabling the feature as a whole, the default
> > > > > > TopicCreator (or whatever) could be a nop. Then we include a
> > > > > > RegexTopicCreator with your proposed behavior. This would be
> almost
> > > > > > indistinguishable from your current KIP from a user's
> perspective,
> > > but
> > > > > > would enable plug-in TopicCreators that do some of the things you
> > > have
> > > > > > listed in the Rejected Alternatives, e.g. to automatically adjust
> > the
> > > > > > replication factor based on the number of nodes, etc.
> > > > > >
> > > > > > My team leverages Connect's plug-ins in other places to enable
> > > seamless
> > > > > > integration with the rest of our platform. We would definitely
> use
> > a
> > > > > topic
> > > > > > creation hook if one existed. In particular, we have a concept of
> > > > "topic
> > > > > > profiles" that we could use here.
> > > > > >
> > > > > > Ryanne
> > > > > >
> > > > > > On Thu, Dec 12, 2019 at 2:00 PM Konstantine Karantasis <
> > > > > > konstant...@confluent.io> wrote:
> > > > > >
> > > > > > > I've taken a second look to KIP-158 after syncing with Randall
> > > Hauch,
> > > > > who
> > > > > > > was the original author of the proposal, and I have updated the
> > KIP
> > > > in
> > > > > > > place.
> > > > > > >
> > > > > > > The main new features of this updated KIP-158 is the
> introduction
> > > of
> > > > > > groups
> > > > > > > of configs that can be composed and the ability to match topics
> > to
> > > > > these
> > > > > > > groups via the use of regex. The design builds on top of the
> > > existing
> > > > > > > definition of config groups used in single message
> > transformations
> > > > > (SMT)
> > > > > > > and therefore I'm hoping that the approach fits well in Kafka
> > > > Connect's
> > > > > > > current configuration capabilities.
> > > > > > >
> > > > > > > The new proposal aims to strike a good balance between
> requiring
> > to
> > > > > > > explicitly set the configs for each possible topic or having a
> > > > > > > one-size-fits-all default set of properties for all the topics
> a
> > > > > > connector
> > > > > > > may create during runtime.
> > > > > > >
> > > > > > >
> > > > > > > The updated KIP-158 can be found under the same page as the old
> > > one:
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-158%3A+Kafka+Connect+should+allow+source+connectors+to+set+topic-specific+settings+for+new+topics
> > > > > > >
> > > > > > > I've intentionally changed the title here in this thread to
> avoid
> > > > > > confusion
> > > > > > > with the threads that discussed KIP-158 previously.
> > > > > > > Looking forward to your comments and hoping we can pick up this
> > > work
> > > > > from
> > > > > > > the very good starting point that was reached in the previous
> > > > > > discussions.
> > > > > > >
> > > > > > >
> > > > > > > Best,
> > > > > > > Konstantine
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> >
> > --
> > -Jose
> >
>


Re: [VOTE] KIP-610: Error Reporting in Sink Connectors

2020-06-05 Thread Randall Hauch
Thanks again to everyone for all the work on this KIP and implementation!

I've discovered that it would be easier for downstream projects if the new
`SinkTaskContext.errantRecordReporter()` method were a default method that
returns null. Strictly speaking it's not required as Connect will provide
the implementation for the Connect runtime, but some downstream projects
may use their own implementations of this interface for testing purposes.
See https://issues.apache.org/jira/browse/KAFKA-10111 for details and
https://github.com/apache/kafka/pull/8814 for the suggested change. IMO
there is little harm in making the existing non-default method a default
that returns null, but please let me know if you object.

Best regards,

Randall

On Thu, May 21, 2020 at 2:10 PM Randall Hauch  wrote:

> The vote has been open for >72 hours, and the KIP is adopted with three +1
> binding votes (Konstantine, Ewen, me), one +1 non-binding vote (Andrew),
> and no -1 votes.
>
> I'll update the KIP and the AK 2.6.0 plan.
>
> Thanks, everyone.
>
> On Tue, May 19, 2020 at 4:33 PM Konstantine Karantasis <
> konstant...@confluent.io> wrote:
>
>> +1 (binding)
>>
>> I like how the KIP looks now too. Quite active discussions within the past
>> few days, which I found very useful.
>>
>> There's some room to allow in the future the connector developers to
>> decide
>> whether they want greater control over error reporting or they want the
>> framework to keep providing the reasonable guarantees that this KIP now
>> describes. The API is expressive enough to accommodate such improvements
>> if
>> they are warranted, but its current form seems quite adequate to support
>> efficient end-to-end error reporting for sink connectors.
>>
>> Thanks for introducing this KIP Aakash!
>>
>> One last minor comment around naming:
>> Currently both the names ErrantRecordReporter and failedRecordReporter are
>> used. Using the same name everywhere seems preferable, so feel free to
>> choose the one that you prefer.
>>
>> Regards,
>> Konstantine
>>
>> On Tue, May 19, 2020 at 2:30 PM Ewen Cheslack-Postava 
>> wrote:
>>
>> > +1 (binding)
>> >
>> > This will be a nice improvement. From the discussion thread it's clear
>> this
>> > is tricky to get right, nice work!
>> >
>> > On Tue, May 19, 2020 at 8:16 AM Andrew Schofield <
>> > andrew_schofi...@live.com>
>> > wrote:
>> >
>> > > +1 (non-binding)
>> > >
>> > > This is now looking very nice.
>> > >
>> > > Andrew Schofield
>> > >
>> > > On 19/05/2020, 16:11, "Randall Hauch"  wrote:
>> > >
>> > > Thank you, Aakash, for putting together this KIP and shepherding
>> the
>> > > discussion. Also, many thanks to all those that participated in
>> the
>> > > very
>> > > active discussion. I'm actually very happy with the current
>> proposal,
>> > > am
>> > > confident that it is a valuable improvement to the Connect
>> framework,
>> > > and
>> > > know that it will be instrumental in making sink tasks easily
>> able to
>> > > report problematic records and keep running.
>> > >
>> > > +1 (binding)
>> > >
>> > > Best regards,
>> > >
>> > > Randall
>> > >
>> > > On Sun, May 17, 2020 at 6:59 PM Aakash Shah 
>> > > wrote:
>> > >
>> > > > Hello all,
>> > > >
>> > > > I'd like to open a vote for KIP-610:
>> > > >
>> > > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-610%3A+Error+Reporting+in+Sink+Connectors
>> > > >
>> > > > Thanks,
>> > > > Aakash
>> > > >
>> > >
>> > >
>> >
>>
>


[jira] [Created] (KAFKA-10111) SinkTaskContext.errantRecordReporter() added in KIP-610 should be a default method

2020-06-05 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10111:
-

 Summary: SinkTaskContext.errantRecordReporter() added in KIP-610 
should be a default method
 Key: KAFKA-10111
 URL: https://issues.apache.org/jira/browse/KAFKA-10111
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.6.0
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 2.6.0


[KIP-610|https://cwiki.apache.org/confluence/display/KAFKA/KIP-610%3A+Error+Reporting+in+Sink+Connectors]
 added a new `errantRecordReporter()` method to `SinkTaskContext`, but the KIP 
didn't make this method a default method. While the AK project can add this 
method to all of its implementations (actual and test), other projects such as 
connector projects might have their own mock implementations just to help test 
the connector implementation. That means when those projects upgrade, they'd 
get compilation problems for their own implementations of `SinkTaskContext`.

> Making this method a default method will avoid such problems for downstream projects, 
and is actually easy since the method is already defined to return null if no 
reporter is configured.
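For reference, the shape of the proposed change is roughly the following (a sketch of just the relevant part of the interface, with the rest of SinkTaskContext omitted):

{code:java}
import org.apache.kafka.connect.sink.ErrantRecordReporter;

public interface SinkTaskContext {
    // ... existing methods unchanged ...

    // With a default implementation returning null, existing SinkTaskContext
    // implementations (including mocks in connector projects) keep compiling.
    default ErrantRecordReporter errantRecordReporter() {
        return null;
    }
}
{code}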



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10110) ConnectDistributed fails with NPE when Kafka cluster has no ID

2020-06-05 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10110:
-

 Summary: ConnectDistributed fails with NPE when Kafka cluster has 
no ID
 Key: KAFKA-10110
 URL: https://issues.apache.org/jira/browse/KAFKA-10110
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.6.0
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 2.6.0


When a Connect worker starts, recent changes from KIP-606 / KAFKA-9960 attempt 
to put the Kafka cluster ID into the new KafkaMetricsContext. But the Kafka 
cluster ID can be null, resulting in an NPE shown in the following log snippet:
{noformat}
[2020-06-04 15:01:02,900] INFO Kafka cluster ID: null 
(org.apache.kafka.connect.util.ConnectUtils)
...
[2020-06-04 15:01:03,271] ERROR Stopping due to error 
(org.apache.kafka.connect.cli.ConnectDistributed)[2020-06-04 15:01:03,271] 
ERROR Stopping due to error 
(org.apache.kafka.connect.cli.ConnectDistributed)java.lang.NullPointerException 
at 
org.apache.kafka.common.metrics.KafkaMetricsContext.lambda$new$0(KafkaMetricsContext.java:48)
 at java.util.HashMap.forEach(HashMap.java:1289) at 
org.apache.kafka.common.metrics.KafkaMetricsContext.(KafkaMetricsContext.java:48)
 at 
org.apache.kafka.connect.runtime.ConnectMetrics.(ConnectMetrics.java:100) 
at org.apache.kafka.connect.runtime.Worker.(Worker.java:135) at 
org.apache.kafka.connect.runtime.Worker.(Worker.java:121) at 
org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:111)
 at 
org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:78)
{noformat}
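As an illustration of one way to avoid the NPE (the actual patch may differ), the worker can simply omit the cluster-id label when the ID is unknown. The label keys used below are assumptions for the sketch, not necessarily the keys the worker actually uses.

{code:java}
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.metrics.KafkaMetricsContext;
import org.apache.kafka.common.metrics.MetricsContext;

public class MetricsContextFactory {
    // Build the Connect metrics context without assuming the Kafka cluster ID is non-null.
    public static MetricsContext create(String kafkaClusterId, String workerGroupId) {
        Map<String, Object> contextLabels = new HashMap<>();
        if (kafkaClusterId != null) {
            contextLabels.put("connect.kafka.cluster.id", kafkaClusterId);
        }
        if (workerGroupId != null) {
            contextLabels.put("connect.group.id", workerGroupId);
        }
        return new KafkaMetricsContext("kafka.connect", contextLabels);
    }
}
{code}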



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 2.6.0 release

2020-05-30 Thread Randall Hauch
Hi, Kowshik,

Thanks for the update on KIP-584. This is listed on the "Postponed" section
of the AK 2.6.0 release plan (
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430).

Best regards,

Randall

On Fri, May 29, 2020 at 4:51 PM Kowshik Prakasam 
wrote:

> Hi Randall,
>
> We have to remove KIP-584 from the release plan, as this item will not be
> completed for 2.6 release (although KIP is accepted). We plan to include it
> in a next release.
>
>
> Cheers,
> Kowshik
>
>
> On Fri, May 29, 2020 at 11:43 AM Maulin Vasavada <
> maulin.vasav...@gmail.com>
> wrote:
>
> > Hi Randall Hauch
> >
> > Can we add KIP-519 to 2.6? It was merged to Trunk already in April -
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=128650952
> > .
> >
> > Thanks
> > Maulin
> >
> > On Fri, May 29, 2020 at 11:01 AM Randall Hauch  wrote:
> >
> > > Here's an update on the AK 2.6.0 release.
> > >
> > > Feature freeze was Wednesday, and the release plan [1] has been updated to
> > > reflect all of the KIPs that made the release. We've also cut the `2.6`
> > > branch that we'll use for the release; see separate email announcing
> the
> > > new branch.
> > >
> > > The next important date for the 2.6.0 release is CODE FREEZE on JUNE
> 10,
> > > and until that date all bug fixes are still welcome on the release
> > branch.
> > > But after that, only blocker bugs can be merged to the release branch.
> > >
> > > If you have any questions or concerns, please contact me or (better
> yet)
> > > reply to this thread.
> > >
> > > Thanks, and best regards!
> > >
> > > Randall
> > >
> > > [1] AK 2.6.0 Release Plan:
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > >
> > >
> > > On Wed, May 27, 2020 at 5:53 PM Matthias J. Sax 
> > wrote:
> > >
> > > > Thanks Randall!
> > > >
> > > > I added missing KIP-594.
> > > >
> > > >
> > > > For the postponed KIP section: I removed KIP-441 and KIP-444 as both
> > are
> > > > completed.
> > > >
> > > >
> > > > -Matthias
> > > >
> > > > On 5/27/20 2:31 PM, Randall Hauch wrote:
> > > > > Hey everyone, just a quick update on the 2.6.0 release.
> > > > >
> > > > > Based on the release plan (
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > > > ),
> > > > > today (May 27) is feature freeze. Any major feature work that is
> not
> > > > > already complete will need to push out to the next release (either
> > 2.7
> > > or
> > > > > 3.0). There are a few PRs for KIPs that are nearing completion, and
> > > we're
> > > > > having some Jenkins build issues. I will send another email later
> > today
> > > > or
> > > > > early tomorrow with an update, and I plan to cut the release branch
> > > > shortly
> > > > > thereafter.
> > > > >
> > > > > I have also updated the list of planned KIPs on the release plan
> > page (
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > > > ),
> > > > > and I've moved to the "Postponed" table any KIP that looks like it
> is
> > > not
> > > > > going to be complete today. If any KIP is in the wrong table,
> please
> > > let
> > > > me
> > > > > know.
> > > > >
> > > > > If you have any questions or concerns, please feel free to reply to
> > > this
> > > > > thread.
> > > > >
> > > > > Thanks, and best regards!
> > > > >
> > > > > Randall
> > > > >
> > > > > On Wed, May 20, 2020 at 2:16 PM Sophie Blee-Goldman <
> > > sop...@confluent.io
> > > > >
> > > > > wrote:
> > > > >
> > > > >> Hey Randall,
> > > > >>
> > > > >> Can you also add KIP-613 which was accepted yesterday?
> > > > >>
> > > > >> Thanks!
> > > > >> Sophie

Re: [DISCUSS] Apache Kafka 2.6.0 release

2020-05-30 Thread Randall Hauch
Hi, Maulin.

Thanks for pointing out that KIP-519 was already merged in April. I've
corrected the
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
page
to reflect this, and have added it to the AK 2.6.0 release plan.

Best regards,

Randall

On Fri, May 29, 2020 at 1:43 PM Maulin Vasavada 
wrote:

> Hi Randall Hauch
>
> Can we add KIP-519 to 2.6? It was merged to Trunk already in April -
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=128650952
> .
>
> Thanks
> Maulin
>
> On Fri, May 29, 2020 at 11:01 AM Randall Hauch  wrote:
>
> > Here's an update on the AK 2.6.0 release.
> >
> > Code freeze was Wednesday, and the release plan [1] has been updated to
> > reflect all of the KIPs that made the release. We've also cut the `2.6`
> > branch that we'll use for the release; see separate email announcing the
> > new branch.
> >
> > The next important date for the 2.6.0 release is CODE FREEZE on JUNE 10,
> > and until that date all bug fixes are still welcome on the release
> branch.
> > But after that, only blocker bugs can be merged to the release branch.
> >
> > If you have any questions or concerns, please contact me or (better yet)
> > reply to this thread.
> >
> > Thanks, and best regards!
> >
> > Randall
> >
> > [1] AK 2.6.0 Release Plan:
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> >
> >
> > On Wed, May 27, 2020 at 5:53 PM Matthias J. Sax 
> wrote:
> >
> > > Thanks Randall!
> > >
> > > I added missing KIP-594.
> > >
> > >
> > > For the postponed KIP section: I removed KIP-441 and KIP-444 as both
> are
> > > completed.
> > >
> > >
> > > -Matthias
> > >
> > > On 5/27/20 2:31 PM, Randall Hauch wrote:
> > > > Hey everyone, just a quick update on the 2.6.0 release.
> > > >
> > > > Based on the release plan (
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > > ),
> > > > today (May 27) is feature freeze. Any major feature work that is not
> > > > already complete will need to push out to the next release (either
> 2.7
> > or
> > > > 3.0). There are a few PRs for KIPs that are nearing completion, and
> > we're
> > > > having some Jenkins build issues. I will send another email later
> today
> > > or
> > > > early tomorrow with an update, and I plan to cut the release branch
> > > shortly
> > > > thereafter.
> > > >
> > > > I have also updated the list of planned KIPs on the release plan
> page (
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > > ),
> > > > and I've moved to the "Postponed" table any KIP that looks like it is
> > not
> > > > going to be complete today. If any KIP is in the wrong table, please
> > let
> > > me
> > > > know.
> > > >
> > > > If you have any questions or concerns, please feel free to reply to
> > this
> > > > thread.
> > > >
> > > > Thanks, and best regards!
> > > >
> > > > Randall
> > > >
> > > > On Wed, May 20, 2020 at 2:16 PM Sophie Blee-Goldman <
> > sop...@confluent.io
> > > >
> > > > wrote:
> > > >
> > > >> Hey Randall,
> > > >>
> > > >> Can you also add KIP-613 which was accepted yesterday?
> > > >>
> > > >> Thanks!
> > > >> Sophie
> > > >>
> > > >> On Wed, May 20, 2020 at 6:47 AM Randall Hauch 
> > wrote:
> > > >>
> > > >>> Hi, Tom. I saw last night that the KIP had enough votes before
> > today’s
> > > >>> deadline and I will add it to the roadmap today. Thanks for driving
> > > this!
> > > >>>
> > > >>> On Wed, May 20, 2020 at 6:18 AM Tom Bentley 
> > > wrote:
> > > >>>
> > > >>>> Hi Randall,
> > > >>>>
> > > >>>> Can we add KIP-585? (I'm not quite sure of the protocol here, but
> > > >> thought
> > > >>>> it better to ask than to just add it myself).
> > > >>>>
> > > >>>> Thanks,
> > > >>>>
> > > >>>> Tom
> > > >>>>
> > > >>>> On Tue, May 5, 2020 at 6:54 PM Randall Hauch 
> > > >> wrote:
> > > >>>>
> > > >>>>> Greetings!
> > > >>>>>
> > > >>>>> I'd like to volunteer to be release manager for the next
> time-based
> > > >>>> feature
> > > >>>>> release which will be 2.6.0. I've published a release plan at
> > > >>>>>
> > > >>>>
> > > >>>
> > > >>
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > > >>>>> ,
> > > >>>>> and have included all of the KIPs that are currently approved or
> > > >>> actively
> > > >>>>> in discussion (though I'm happy to adjust as necessary).
> > > >>>>>
> > > >>>>> To stay on our time-based cadence, the KIP freeze is on May 20
> > with a
> > > >>>>> target release date of June 24.
> > > >>>>>
> > > >>>>> Let me know if there are any objections.
> > > >>>>>
> > > >>>>> Thanks,
> > > >>>>> Randall Hauch
> > > >>>>>
> > > >>>>
> > > >>>
> > > >>
> > > >
> > >
> > >
> >
>


Re: [DISCUSS] Apache Kafka 2.6.0 release

2020-05-29 Thread Randall Hauch
Here's an update on the AK 2.6.0 release.

Code freeze was Wednesday, and the release plan [1] has been updated to
reflect all of the KIPs that made the release. We've also cut the `2.6`
branch that we'll use for the release; see separate email announcing the
new branch.

The next important date for the 2.6.0 release is CODE FREEZE on JUNE 10,
and until that date all bug fixes are still welcome on the release branch.
But after that, only blocker bugs can be merged to the release branch.

If you have any questions or concerns, please contact me or (better yet)
reply to this thread.

Thanks, and best regards!

Randall

[1] AK 2.6.0 Release Plan:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430


On Wed, May 27, 2020 at 5:53 PM Matthias J. Sax  wrote:

> Thanks Randall!
>
> I added missing KIP-594.
>
>
> For the postponed KIP section: I removed KIP-441 and KIP-444 as both are
> completed.
>
>
> -Matthias
>
> On 5/27/20 2:31 PM, Randall Hauch wrote:
> > Hey everyone, just a quick update on the 2.6.0 release.
> >
> > Based on the release plan (
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> ),
> > today (May 27) is feature freeze. Any major feature work that is not
> > already complete will need to push out to the next release (either 2.7 or
> > 3.0). There are a few PRs for KIPs that are nearing completion, and we're
> > having some Jenkins build issues. I will send another email later today
> or
> > early tomorrow with an update, and I plan to cut the release branch
> shortly
> > thereafter.
> >
> > I have also updated the list of planned KIPs on the release plan page (
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> ),
> > and I've moved to the "Postponed" table any KIP that looks like it is not
> > going to be complete today. If any KIP is in the wrong table, please let
> me
> > know.
> >
> > If you have any questions or concerns, please feel free to reply to this
> > thread.
> >
> > Thanks, and best regards!
> >
> > Randall
> >
> > On Wed, May 20, 2020 at 2:16 PM Sophie Blee-Goldman  >
> > wrote:
> >
> >> Hey Randall,
> >>
> >> Can you also add KIP-613 which was accepted yesterday?
> >>
> >> Thanks!
> >> Sophie
> >>
> >> On Wed, May 20, 2020 at 6:47 AM Randall Hauch  wrote:
> >>
> >>> Hi, Tom. I saw last night that the KIP had enough votes before today’s
> >>> deadline and I will add it to the roadmap today. Thanks for driving
> this!
> >>>
> >>> On Wed, May 20, 2020 at 6:18 AM Tom Bentley 
> wrote:
> >>>
> >>>> Hi Randall,
> >>>>
> >>>> Can we add KIP-585? (I'm not quite sure of the protocol here, but
> >> thought
> >>>> it better to ask than to just add it myself).
> >>>>
> >>>> Thanks,
> >>>>
> >>>> Tom
> >>>>
> >>>> On Tue, May 5, 2020 at 6:54 PM Randall Hauch 
> >> wrote:
> >>>>
> >>>>> Greetings!
> >>>>>
> >>>>> I'd like to volunteer to be release manager for the next time-based
> >>>> feature
> >>>>> release which will be 2.6.0. I've published a release plan at
> >>>>>
> >>>>
> >>>
> >>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> >>>>> ,
> >>>>> and have included all of the KIPs that are currently approved or
> >>> actively
> >>>>> in discussion (though I'm happy to adjust as necessary).
> >>>>>
> >>>>> To stay on our time-based cadence, the KIP freeze is on May 20 with a
> >>>>> target release date of June 24.
> >>>>>
> >>>>> Let me know if there are any objections.
> >>>>>
> >>>>> Thanks,
> >>>>> Randall Hauch
> >>>>>
> >>>>
> >>>
> >>
> >
>
>


New release branch 2.6

2020-05-28 Thread Randall Hauch
Hello Kafka developers and friends,

We now have a release branch for the 2.6 release. The branch name is "2.6"
and the version will be "2.6.0". Trunk will be shortly be bumped to the
next snapshot version 2.7.0-SNAPSHOT (
https://github.com/apache/kafka/pull/8746).

I'll be going over the JIRAs to move every non-blocker feature from this
release to the next release. If you have any questions or concerns, please
ask on the "Apache Kafka 2.6.0 release" discussion thread.

From this point, most changes should go to trunk. However, all bug fixes
are still welcome on the release branch until the code freeze on June 10.
After that, only blocker bugs should be merged to the release branch.

Blockers (existing and new that we discover while testing the release) will
be committed to trunk and backported to the 2.6 release branch.

Please discuss with your reviewer whether your PR should go to trunk or to
trunk+release so they can merge accordingly.

As always, please help us test the release!

Thanks!
Randall Hauch


[jira] [Resolved] (KAFKA-9673) Conditionally apply SMTs

2020-05-28 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9673.
--
Fix Version/s: 2.6.0
 Reviewer: Konstantine Karantasis
   Resolution: Fixed

KIP-585 was approved by the 2.6.0 KIP freeze, and the PR was approved and 
merged to `trunk` before 2.6.0 feature freeze.
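
For anyone skimming the archive, here is a minimal, hypothetical Java sketch of the idea discussed in the description below: a wrapper SMT that applies a delegate transformation only when the record's topic matches a pattern. The class name, the hard-wired ExtractField delegate, and the flat config keys are illustrative assumptions only; the merged KIP-585 implementation configures predicates and per-transformation conditions differently.

{noformat}
import java.util.Collections;
import java.util.Map;
import java.util.regex.Pattern;

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.transforms.ExtractField;
import org.apache.kafka.connect.transforms.Transformation;

/**
 * Illustrative only: applies a wrapped SMT when the record's topic matches a regex,
 * and passes every other record through untouched.
 */
public class TopicMatchesConditional<R extends ConnectRecord<R>> implements Transformation<R> {

    private Pattern pattern;
    private Transformation<R> delegate;

    @Override
    public void configure(Map<String, ?> configs) {
        pattern = Pattern.compile((String) configs.get("pattern"));
        // The delegate is hard-wired here for brevity; the proposal below configures the
        // wrapped transforms through sub-keys such as transforms.<alias>.transforms.*.
        delegate = new ExtractField.Key<>();
        delegate.configure(Collections.singletonMap("field", configs.get("field")));
    }

    @Override
    public R apply(R record) {
        boolean matches = record.topic() != null && pattern.matcher(record.topic()).matches();
        return matches ? delegate.apply(record) : record;
    }

    @Override
    public ConfigDef config() {
        return new ConfigDef()
                .define("pattern", ConfigDef.Type.STRING, ConfigDef.Importance.HIGH,
                        "Regex that a record's topic must match for the delegate to run")
                .define("field", ConfigDef.Type.STRING, ConfigDef.Importance.HIGH,
                        "Field extracted by the hard-wired ExtractField delegate");
    }

    @Override
    public void close() {
        delegate.close();
    }
}
{noformat}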

> Conditionally apply SMTs
> 
>
> Key: KAFKA-9673
> URL: https://issues.apache.org/jira/browse/KAFKA-9673
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Major
> Fix For: 2.6.0
>
>
> KAFKA-7052 ended up using IAE with a message, rather than NPE in the case of 
> a SMT being applied to a record lacking a given field. It's still not 
> possible to apply a SMT conditionally, which is what things like Debezium 
> really need in order to apply transformations only to non-schema change 
> events.
> [~rhauch] suggested a mechanism to conditionally apply any SMT but was 
> concerned about the possibility of a naming collision (assuming it was 
> configured by a simple config)
> I'd like to propose something which would solve this problem without the 
> possibility of such collisions. The idea is to have a higher-level condition, 
> which applies an arbitrary transformation (or transformation chain) according 
> to some predicate on the record. 
> More concretely, it might be configured like this:
> {noformat}
>   transforms.conditionalExtract.type: Conditional
>   transforms.conditionalExtract.transforms: extractInt
>   transforms.conditionalExtract.transforms.extractInt.type: 
> org.apache.kafka.connect.transforms.ExtractField$Key
>   transforms.conditionalExtract.transforms.extractInt.field: c1
>   transforms.conditionalExtract.condition: topic-matches:<pattern>
> {noformat}
> * The {{Conditional}} SMT is configured with its own list of transforms 
> ({{transforms.conditionalExtract.transforms}}) to apply. This would work just 
> like the top level {{transforms}} config, so subkeys can be used to configure 
> these transforms in the usual way.
> * The {{condition}} config defines the predicate for when the transforms are 
> applied to a record using a {{<condition type>:<condition value>}} syntax
> We could initially support three condition types:
> *{{topic-matches:<pattern>}}* The transformation would be applied if the 
> record's topic name matched the given regular expression pattern. For 
> example, the following would apply the transformation on records being sent 
> to any topic with a name beginning with "my-prefix-":
> {noformat}
>transforms.conditionalExtract.condition: topic-matches:my-prefix-.*
> {noformat}
>
> *{{has-header:<header name>}}* The transformation would be applied if the 
> record had at least one header with the given name. For example, the 
> following will apply the transformation on records with at least one header 
> with the name "my-header":
> {noformat}
>transforms.conditionalExtract.condition: has-header:my-header
> {noformat}
>
> *{{not:<condition name>}}* This would negate the result of another named 
> condition using the condition config prefix. For example, the following will 
> apply the transformation on records which lack any header with the name 
> my-header:
> {noformat}
>   transforms.conditionalExtract.condition: not:hasMyHeader
>   transforms.conditionalExtract.condition.hasMyHeader: 
> has-header:my-header
> {noformat}
> I foresee one implementation concern with this approach, which is that 
> currently {{Transformation}} has to return a fixed {{ConfigDef}}, and this 
> proposal would require something more flexible in order to allow the config 
> parameters to depend on the listed transform aliases (and similarly for named 
> predicate used for the {{not:}} predicate). I think this could be done by 
> adding a {{default}} method to {{Transformation}} for getting the ConfigDef 
> given the config, for example.
> Obviously this would require a KIP, but before I spend any more time on this 
> I'd be interested in your thoughts [~rhauch], [~rmoff], [~gunnar.morling].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9971) Error Reporting in Sink Connectors

2020-05-28 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9971.
--
Fix Version/s: 2.6.0
 Reviewer: Randall Hauch
 Assignee: Aakash Shah
   Resolution: Fixed

Merged to `trunk` for inclusion in the upcoming 2.6.0 release. This was 
approved and merged before feature freeze.
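
As a point of reference, here is a hedged sketch of how a sink task might use the errant-record reporting added by this work. The API names (SinkTaskContext.errantRecordReporter() and ErrantRecordReporter.report()) are from KIP-610 as I recall them; the task class and its writeToExternalSystem() helper are hypothetical.

{noformat}
import java.util.Collection;
import java.util.Map;

import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.sink.ErrantRecordReporter;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class ExampleSinkTask extends SinkTask {

    private ErrantRecordReporter reporter;

    @Override
    public String version() {
        return "0.0.1"; // placeholder version string
    }

    @Override
    public void start(Map<String, String> props) {
        try {
            // Null when the connector is not configured with error reporting / a dead letter queue.
            reporter = context.errantRecordReporter();
        } catch (NoSuchMethodError | NoClassDefFoundError e) {
            // Deployed on an older worker that predates this API.
            reporter = null;
        }
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        for (SinkRecord record : records) {
            try {
                writeToExternalSystem(record);
            } catch (Exception e) {
                if (reporter != null) {
                    // Hand the bad record to the worker's error handling instead of failing the task.
                    reporter.report(record, e);
                } else {
                    throw new ConnectException("Failed to write record to the external system", e);
                }
            }
        }
    }

    private void writeToExternalSystem(SinkRecord record) {
        // Connector-specific write logic would go here.
    }

    @Override
    public void stop() {
        // Nothing to clean up in this sketch.
    }
}
{noformat}

The try/catch around the lookup lets the same connector also run on pre-2.6 workers that do not ship the new method.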

> Error Reporting in Sink Connectors
> --
>
> Key: KAFKA-9971
> URL: https://issues.apache.org/jira/browse/KAFKA-9971
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.6.0
>Reporter: Aakash Shah
>Assignee: Aakash Shah
>Priority: Critical
> Fix For: 2.6.0
>
>
> Currently, 
> [KIP-298|https://cwiki.apache.org/confluence/display/KAFKA/KIP-298%3A+Error+Handling+in+Connect]
>  provides error handling in Kafka Connect that includes functionality such as 
> retrying, logging, and sending errant records to a dead letter queue. 
> However, the dead letter queue functionality from KIP-298 only supports error 
> reporting within contexts of the transform operation, and key, value, and 
> header converter operation. Within the context of the {{put(...)}} method in 
> sink connectors, there is no support for dead letter queue/error reporting 
> functionality. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9960) Metrics Reporter should support additional context tags

2020-05-27 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9960.
--
Resolution: Fixed

Merged the PR to the `trunk` branch for inclusion in the AK 2.6.0 release.
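
As a rough illustration of what the KIP enables, below is a hedged sketch of a MetricsReporter that captures the contextual labels handed to it via contextChange(). The reporter class name is invented, and the exact accessor on MetricsContext (contextLabels() here) should be double-checked against the released javadoc.

{noformat}
import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.apache.kafka.common.metrics.KafkaMetric;
import org.apache.kafka.common.metrics.MetricsContext;
import org.apache.kafka.common.metrics.MetricsReporter;

public class ContextAwareReporter implements MetricsReporter {

    private Map<String, ?> contextLabels = Collections.emptyMap();

    @Override
    public void configure(Map<String, ?> configs) {
        // No reporter-specific configuration in this sketch.
    }

    @Override
    public void contextChange(MetricsContext metricsContext) {
        // Receives the namespace plus any extra labels the application supplied,
        // e.g. via metrics.context.* properties, so they can be attached to exported metrics.
        contextLabels = metricsContext.contextLabels();
    }

    @Override
    public void init(List<KafkaMetric> metrics) {
        System.out.println("Reporting " + metrics.size() + " metrics with context " + contextLabels);
    }

    @Override
    public void metricChange(KafkaMetric metric) {
    }

    @Override
    public void metricRemoval(KafkaMetric metric) {
    }

    @Override
    public void close() {
    }
}
{noformat}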

> Metrics Reporter should support additional context tags
> ---
>
> Key: KAFKA-9960
> URL: https://issues.apache.org/jira/browse/KAFKA-9960
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
>Priority: Major
> Fix For: 2.6.0
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-606%3A+Add+Metadata+Context+to+MetricsReporter
> MetricsReporters often rely on additional context that is currently hard to 
> access or propagate through an application. The KIP linked above proposes to 
> address those shortcomings.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 2.6.0 release

2020-05-27 Thread Randall Hauch
Hey everyone, just a quick update on the 2.6.0 release.

Based on the release plan (
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430),
today (May 27) is feature freeze. Any major feature work that is not
already complete will need to push out to the next release (either 2.7 or
3.0). There are a few PRs for KIPs that are nearing completion, and we're
having some Jenkins build issues. I will send another email later today or
early tomorrow with an update, and I plan to cut the release branch shortly
thereafter.

I have also updated the list of planned KIPs on the release plan page (
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430),
and I've moved to the "Postponed" table any KIP that looks like it is not
going to be complete today. If any KIP is in the wrong table, please let me
know.

If you have any questions or concerns, please feel free to reply to this
thread.

Thanks, and best regards!

Randall

On Wed, May 20, 2020 at 2:16 PM Sophie Blee-Goldman 
wrote:

> Hey Randall,
>
> Can you also add KIP-613 which was accepted yesterday?
>
> Thanks!
> Sophie
>
> On Wed, May 20, 2020 at 6:47 AM Randall Hauch  wrote:
>
> > Hi, Tom. I saw last night that the KIP had enough votes before today’s
> > deadline and I will add it to the roadmap today. Thanks for driving this!
> >
> > On Wed, May 20, 2020 at 6:18 AM Tom Bentley  wrote:
> >
> > > Hi Randall,
> > >
> > > Can we add KIP-585? (I'm not quite sure of the protocol here, but
> thought
> > > it better to ask than to just add it myself).
> > >
> > > Thanks,
> > >
> > > Tom
> > >
> > > On Tue, May 5, 2020 at 6:54 PM Randall Hauch 
> wrote:
> > >
> > > > Greetings!
> > > >
> > > > I'd like to volunteer to be release manager for the next time-based
> > > feature
> > > > release which will be 2.6.0. I've published a release plan at
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=152113430
> > > > ,
> > > > and have included all of the KIPs that are currently approved or
> > actively
> > > > in discussion (though I'm happy to adjust as necessary).
> > > >
> > > > To stay on our time-based cadence, the KIP freeze is on May 20 with a
> > > > target release date of June 24.
> > > >
> > > > Let me know if there are any objections.
> > > >
> > > > Thanks,
> > > > Randall Hauch
> > > >
> > >
> >
>

