Re: [DISCUSS] Apache Kafka 3.4.0 release

2022-10-05 Thread David Jacot
+1. Thanks, Sophie!

On Wed, Oct 5, 2022 at 7:57 PM Luke Chen  wrote:

> Hi Sophie,
>
> Thanks for volunteering!
>
> Luke
>
> On Thu, Oct 6, 2022 at 6:17 AM José Armando García Sancio
>  wrote:
>
> > Thanks for volunteering Sophie.
> >
> > On Wed, Oct 5, 2022 at 3:01 PM Sophie Blee-Goldman
> >  wrote:
> > >
> > > Hey all,
> > >
> > > I'd like to volunteer as release manager for the next feature release,
> > > which will be Apache
> > > Kafka 3.4.0. If that sounds good to everyone I'll update this thread
> with
> > > the release plan in the coming week.
> > >
> > > Cheers,
> > > A. Sophie Blee-Goldman
> >
> >
> >
> > --
> > -José
> >
>


Re: [DISCUSS] Apache Kafka 3.4.0 release

2022-10-05 Thread Luke Chen
Hi Sophie,

Thanks for volunteering!

Luke

On Thu, Oct 6, 2022 at 6:17 AM José Armando García Sancio
 wrote:

> Thanks for volunteering Sophie.
>
> On Wed, Oct 5, 2022 at 3:01 PM Sophie Blee-Goldman
>  wrote:
> >
> > Hey all,
> >
> > I'd like to volunteer as release manager for the next feature release,
> > which will be Apache
> > Kafka 3.4.0. If that sounds good to everyone I'll update this thread with
> > the release plan in the coming week.
> >
> > Cheers,
> > A. Sophie Blee-Goldman
>
>
>
> --
> -José
>


Re: [DISCUSS] Apache Kafka 3.4.0 release

2022-10-05 Thread José Armando García Sancio
Thanks for volunteering Sophie.

On Wed, Oct 5, 2022 at 3:01 PM Sophie Blee-Goldman
 wrote:
>
> Hey all,
>
> I'd like to volunteer as release manager for the next feature release,
> which will be Apache
> Kafka 3.4.0. If that sounds good to everyone I'll update this thread with
> the release plan in the coming week.
>
> Cheers,
> A. Sophie Blee-Goldman



-- 
-José


[DISCUSS] Apache Kafka 3.4.0 release

2022-10-05 Thread Sophie Blee-Goldman
Hey all,

I'd like to volunteer as release manager for the next feature release,
which will be Apache
Kafka 3.4.0. If that sounds good to everyone I'll update this thread with
the release plan in the coming week.

Cheers,
A. Sophie Blee-Goldman


[GitHub] [kafka-site] jsancio merged pull request #455: MINOR; Document the 3.3.1 Release

2022-10-05 Thread GitBox


jsancio merged PR #455:
URL: https://github.com/apache/kafka-site/pull/455





[GitHub] [kafka-site] mumrah commented on a diff in pull request #455: MINOR; Document the 3.3.1 Release

2022-10-05 Thread GitBox


mumrah commented on code in PR #455:
URL: https://github.com/apache/kafka-site/pull/455#discussion_r985252020


##
downloads.html:
##
@@ -6,12 +6,56 @@
 
 Download
 
-3.2.3 is the latest release. The current stable version is 3.2.3.
+3.3.1 is the latest release. The current stable version is 3.3.1.
 
 You can verify your download by following these <a href="https://www.apache.org/info/verification.html">procedures</a> and using these <a href="https://downloads.apache.org/kafka/KEYS">KEYS</a>.
 
+3.3.1
+
+Released October 3, 2022
+
+<a href="https://downloads.apache.org/kafka/3.3.1/RELEASE_NOTES.html">3.3.1</a> and <a href="https://archive.apache.org/dist/kafka/3.3.0/RELEASE_NOTES.html">3.3.0</a> Release Notes
+
+Source download: <a href="https://downloads.apache.org/kafka/3.3.1/kafka-3.3.1-src.tgz">kafka-3.3.1-src.tgz</a> (<a href="https://downloads.apache.org/kafka/3.3.1/kafka-3.3.1-src.tgz.asc">asc</a>, <a href="https://downloads.apache.org/kafka/3.3.1/kafka-3.3.1-src.tgz.sha512">sha512</a>)
+
+Binary downloads:
+Scala 2.12 - <a href="https://downloads.apache.org/kafka/3.3.1/kafka_2.12-3.3.1.tgz">kafka_2.12-3.3.1.tgz</a> (<a href="https://downloads.apache.org/kafka/3.3.1/kafka_2.12-3.3.1.tgz.asc">asc</a>, <a href="https://downloads.apache.org/kafka/3.3.1/kafka_2.12-3.3.1.tgz.sha512">sha512</a>)
+Scala 2.13 - <a href="https://downloads.apache.org/kafka/3.3.1/kafka_2.13-3.3.1.tgz">kafka_2.13-3.3.1.tgz</a> (<a href="https://downloads.apache.org/kafka/3.3.1/kafka_2.13-3.3.1.tgz.asc">asc</a>, <a href="https://downloads.apache.org/kafka/3.3.1/kafka_2.13-3.3.1.tgz.sha512">sha512</a>)
+
+We build for multiple versions of Scala. This only matters if you are using Scala and you want a version built for the same Scala version you use. Otherwise any version should work (2.13 is recommended).
+
+Kafka 3.3.1 includes a number of significant new features. Here is a summary of some notable changes:
+
+KIP-833: Mark KRaft as Production Ready
+KIP-778: KRaft to KRaft upgrades
+KIP-835: Monitor KRaft Controller Quorum health
+KIP-794: Strictly Uniform Sticky Partitioner
+KIP-834: Pause/resume KafkaStreams topologies
+KIP-618: Exactly-Once support for source connectors
+
+For more information, please read the detailed <a href="https://downloads.apache.org/kafka/3.3.1/RELEASE_NOTES.html">3.3.1</a> and <a href="https://archive.apache.org/dist/kafka/3.3.0/RELEASE_NOTES.html">3.3.0</a> Release Notes.
+
+3.3.0
+A significant bug was found in the 3.3.0 release after artifacts were pushed to Apache and Maven central but prior to the release announcement. As a result the decision was taken to not announce 3.3.0 and release 3.3.1 with the fix. It is recommended that 3.3.0 not be used.

Review Comment:
   ```suggestion
   A significant bug was found in the 3.3.0 release after artifacts were 
pushed to Apache and Maven central but prior to the release announcement. As a 
result, the decision was made to not announce 3.3.0 and instead release 3.3.1 
with the fix. It is recommended that 3.3.0 not be used.
   ```






Connector API callbacks for create/delete events

2022-10-05 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Hi,

We have some custom connectors that require provisioning external resources 
(think of creating queues, S3 buckets, or activating accounts) when the 
connector instance is created, but that also need to clean up these resources 
(delete, deactivate) when the connector instance is deleted.

The connector API (org.apache.kafka.connect.connector.Connector) provides 
start() and stop() methods and, while we can probably work around the start() 
method to check whether the external resources have already been initialized, 
there is currently no hook that a connector can use to perform cleanup tasks 
when it is deleted.
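
For context, here is a minimal sketch (in Java) of the kind of hook this would add. The onCreate()/onDelete() names are purely hypothetical and do not exist in the current Connector API; they only illustrate the shape of the proposal:

```java
// Hypothetical sketch only: onCreate()/onDelete() do NOT exist in the current
// org.apache.kafka.connect.connector.Connector API; they illustrate the kind of
// lifecycle hooks a KIP could propose.
import org.apache.kafka.connect.connector.Connector;

public abstract class ResourceManagingConnector extends Connector {

    /** Would be invoked by the herder exactly once, when the connector instance is created. */
    public void onCreate() {
        // provision external resources: create queues, S3 buckets, activate accounts, ...
    }

    /** Would be invoked by the herder exactly once, when the connector instance is deleted. */
    public void onDelete() {
        // clean up: delete queues/buckets, deactivate accounts, ...
    }
}
```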

I'm planning to write a KIP that enhances the Connector API with methods 
that are invoked by the herder when connectors are created and/or deleted; but 
before doing so, I wanted to ask the community whether there are already 
workarounds that can be used to achieve this.

Thank you!

Re: [DISCUSS] KIP-866 ZooKeeper to KRaft Migration

2022-10-05 Thread Mickael Maison
Hi David,

Thanks for starting this important KIP.

I've just taken a quick look so far but I've got a couple of initial questions:

1) What happens if a non-KRaft-compatible broker (or one with
kafka.metadata.migration.enable set to false) joins the cluster after
the migration is triggered?

2) In the Failure Modes section you mention a scenario where a write
to ZK fails. What happens when the divergence limit is reached? Is
this a fatal condition? How much divergence should we allow?

Thanks,
Mickael

On Wed, Oct 5, 2022 at 12:20 AM David Arthur  wrote:
>
> Hey folks, I wanted to get the ball rolling on the discussion for the
> ZooKeeper migration KIP. This KIP details how we plan to do an online
> migration of metadata from ZooKeeper to KRaft as well as a rolling
> upgrade of brokers to KRaft mode.
>
> The general idea is to keep KRaft and ZooKeeper in sync during the
> migration, so both types of brokers can exist simultaneously. Then,
> once everything is migrated and updated, we can turn off ZooKeeper
> writes.
>
> This is a pretty complex KIP, so please take a look :)
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-866+ZooKeeper+to+KRaft+Migration
>
> Thanks!
> David
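
For readers skimming the thread, the migration described above can be thought of as a small progression of phases: dual-write metadata to ZooKeeper and KRaft while both kinds of brokers coexist, then turn off ZooKeeper writes once every broker runs in KRaft mode. A very rough sketch, with names invented for illustration and not taken from KIP-866:

```java
// Illustration only -- these names are invented and do not come from KIP-866.
public class MigrationPhaseSketch {

    enum Phase {
        ZK_ONLY,     // pre-migration: ZooKeeper is the metadata source of truth
        DUAL_WRITE,  // migration in progress: KRaft and ZooKeeper kept in sync
        KRAFT_ONLY   // post-migration: ZooKeeper writes turned off
    }

    public static void main(String[] args) {
        for (Phase p : Phase.values()) {
            System.out.println(p);
        }
    }
}
```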


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1274

2022-10-05 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-14281) Multi-level rack awareness

2022-10-05 Thread Viktor Somogyi-Vass (Jira)
Viktor Somogyi-Vass created KAFKA-14281:
---

 Summary: Multi-level rack awareness
 Key: KAFKA-14281
 URL: https://issues.apache.org/jira/browse/KAFKA-14281
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 3.4.0
Reporter: Viktor Somogyi-Vass
Assignee: Viktor Somogyi-Vass


h1. Motivation

With replication services, data can be replicated across independent Kafka 
clusters in multiple data centers. In addition, many customers need "stretch 
clusters": a single Kafka cluster that spans multiple data centers. 
This architecture has the following useful characteristics:
 - Data is natively replicated into all data centers by Kafka topic replication.
 - No data is lost when one DC is lost and no configuration change is required; 
the design implicitly relies on native Kafka replication.
 - From an operational point of view, it is much easier to configure and operate 
such a topology than a replication scenario via MM2.

Kafka should provide "native" support for stretch clusters, covering any 
special aspects of operating them.

h2. Multi-level rack awareness

Currently, stretch clusters are implemented using the rack awareness 
feature, where each DC is represented as a rack. This ensures that replicas are 
spread evenly across DCs. Unfortunately, there are cases where this is too 
limiting: when there are actual racks inside the DCs, we cannot specify 
those. Consider having 3 DCs with 2 racks each:

/DC1/R1, /DC1/R2
/DC2/R1, /DC2/R2
/DC3/R1, /DC3/R2

If we were to use DC1, DC2, DC3 as the racks, we would lose the rack-level 
information of the setup. This means that with RF=6, it is possible that the 
2 replicas assigned to DC1 both end up in the same rack.

If we were to use /DC1/R1, /DC1/R2, etc. as the racks, then with RF=3 it is 
possible that 2 replicas end up in the same DC, e.g. /DC1/R1, /DC1/R2, /DC2/R1.

Because of this, Kafka should support "multi-level" racks, meaning that 
rack IDs should be able to describe some kind of hierarchy. With this 
feature, brokers should be able to (see the sketch after this list):
 # spread replicas evenly based on the top level of the hierarchy (i.e. first, 
between DCs)
 # then, inside a top-level unit (DC), if there are multiple replicas, spread 
them evenly among lower-level units (i.e. between racks, then between physical 
hosts, and so on)
 ## repeat for all levels
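
A minimal, self-contained sketch (not Kafka code; the class and method names are made up) of how such hierarchical rack IDs could be split into levels and grouped by their top-level unit:

```java
// Illustration only: shows how a hierarchical rack id such as "/DC1/R2" could be
// split into levels so that replicas are spread by the top level (DC) first and
// only then by the lower level (rack). Names are invented for this sketch.
import java.util.*;
import java.util.stream.Collectors;

public class MultiLevelRackSketch {

    /** "/DC1/R2" -> ["DC1", "R2"] */
    static List<String> levels(String rackId) {
        return Arrays.stream(rackId.split("/"))
                     .filter(s -> !s.isEmpty())
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> brokerRacks = List.of(
                "/DC1/R1", "/DC1/R2", "/DC2/R1", "/DC2/R2", "/DC3/R1", "/DC3/R2");

        // Group brokers by their top-level unit (the DC). A multi-level-aware assignor
        // would place replicas round-robin across these groups first, then across the
        // second-level racks inside each group.
        Map<String, List<String>> byDc = brokerRacks.stream()
                .collect(Collectors.groupingBy(r -> levels(r).get(0)));

        // e.g. {DC1=[/DC1/R1, /DC1/R2], DC2=[...], DC3=[...]} (map order may vary)
        System.out.println(byDc);
    }
}
```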





[DISCUSS] KIP-874: TopicRoundRobinAssignor

2022-10-05 Thread Mathieu Amblard
Hi Kafka Developers,

My proposal is to add a new partition assignment strategy at the topic
level in order to:
 - achieve better data consistency per consumed topic in case of an exception
 - provide a more thread-safe solution for the consumer
when there are multiple consumers and multiple topics.

Here is the link to the KIP with all the explanations :
https://cwiki.apache.org/confluence/x/XozGDQ
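
For illustration, here is a hedged sketch of how a consumer would opt into such an assignor via partition.assignment.strategy; the fully qualified class name below is an assumption, since the assignor is only proposed in the KIP and is not part of any released Kafka client:

```java
// Sketch: configuring a consumer with the proposed assignor. The class name used for
// PARTITION_ASSIGNMENT_STRATEGY_CONFIG is hypothetical -- creating the consumer would
// only succeed once an assignor with that name is actually on the classpath.
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TopicRoundRobinExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Existing strategies include RangeAssignor, RoundRobinAssignor, StickyAssignor and
        // CooperativeStickyAssignor; the KIP would add a topic-level strategy alongside them.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                  "org.apache.kafka.clients.consumer.TopicRoundRobinAssignor"); // hypothetical FQCN
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe/poll as usual; only the assignment strategy differs
        }
    }
}
```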

Thank you in advance for your feedback,
Mathieu