[jira] [Created] (KAFKA-15376) Revisit removing data earlier to the current leader for topics enabled with tiered storage.

2023-08-17 Thread Satish Duggana (Jira)
Satish Duggana created KAFKA-15376:
--

 Summary: Revisit removing data earlier to the current leader for 
topics enabled with tiered storage.
 Key: KAFKA-15376
 URL: https://issues.apache.org/jira/browse/KAFKA-15376
 Project: Kafka
  Issue Type: Task
  Components: core
Reporter: Satish Duggana


Followup on the discussion thread:

[https://github.com/apache/kafka/pull/13561#discussion_r1288778006]

 





Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2115

2023-08-17 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-910: Update Source offsets for Source Connectors without producing records

2023-08-17 Thread Sagar
Hi All,

Bumping the voting thread again.

Thanks!
Sagar.

On Wed, Aug 2, 2023 at 4:43 PM Sagar  wrote:

> Attaching the KIP link for reference:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-910%3A+Update+Source+offsets+for+Source+Connectors+without+producing+records
>
> Thanks!
> Sagar.
>
> On Wed, Aug 2, 2023 at 4:37 PM Sagar  wrote:
>
>> Hi All,
>>
>> Calling a Vote on KIP-910 [1]. I feel we have converged to a reasonable
>> design. Of course, I am open to any feedback/suggestions and would address
>> them.
>>
>> Thanks!
>> Sagar.
>>
>


[jira] [Resolved] (KAFKA-15345) KRaft leader should notify the listener only when it has read up to the leader's epoch

2023-08-17 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-15345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

José Armando García Sancio resolved KAFKA-15345.

Resolution: Fixed

> KRaft leader should notify the listener only when it has read up to the 
> leader's epoch
> --
>
> Key: KAFKA-15345
> URL: https://issues.apache.org/jira/browse/KAFKA-15345
> Project: Kafka
>  Issue Type: Bug
>Reporter: José Armando García Sancio
>Assignee: José Armando García Sancio
>Priority: Major
> Fix For: 3.6.0
>
>
> In a non-empty log the KRaft leader only notifies the listener of leadership 
> when it has read up to the leader's epoch start offset. This guarantees that the 
> leader epoch has been committed and that the listener has read all committed 
> offsets/records.
> Unfortunately, the KRaft leader doesn't do this when the log is empty. When 
> the log is empty the listener is notified immediately when it has become 
> leader. This makes the API inconsistent and harder to program against.





Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2114

2023-08-17 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2113

2023-08-17 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 404497 lines...]

Gradle Test Run :streams:test > Gradle Test Executor 85 > ReadOnlyTaskTest > 
shouldDelegateChangelogPartitions() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 85 > ReadOnlyTaskTest > 
shouldDelegateChangelogPartitions() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 85 > ReadOnlyTaskTest > 
shouldDelegateState() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 85 > ReadOnlyTaskTest > 
shouldDelegateState() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 85 > ReadOnlyTaskTest > 
shouldDelegateIsActive() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 85 > ReadOnlyTaskTest > 
shouldDelegateIsActive() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 85 > ReadOnlyTaskTest > 
shouldDelegateId() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 85 > ReadOnlyTaskTest > 
shouldDelegateId() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 85 > ReadOnlyTaskTest > 
shouldDelegateInputPartitions() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 85 > ReadOnlyTaskTest > 
shouldDelegateInputPartitions() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 85 > ReadOnlyTaskTest > 
shouldDelegateCommitRequested() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 85 > ReadOnlyTaskTest > 
shouldDelegateCommitRequested() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 85 > ReadOnlyTaskTest > 
shouldDelegateNeedsInitializationOrRestoration() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 85 > ReadOnlyTaskTest > 
shouldDelegateNeedsInitializationOrRestoration() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TaskAndActionTest > 
shouldThrowIfAddTaskActionIsCreatedWithNullTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TaskAndActionTest > 
shouldThrowIfAddTaskActionIsCreatedWithNullTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TaskAndActionTest > 
shouldCreateAddTaskAction() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TaskAndActionTest > 
shouldCreateAddTaskAction() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TaskAndActionTest > 
shouldCreateRemoveTaskAction() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TaskAndActionTest > 
shouldCreateRemoveTaskAction() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TaskAndActionTest > 
shouldThrowIfRemoveTaskActionIsCreatedWithNullTaskId() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TaskAndActionTest > 
shouldThrowIfRemoveTaskActionIsCreatedWithNullTaskId() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TasksTest > 
onlyRemovePendingTaskToSuspendShouldRemoveTaskFromPendingUpdateActions() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TasksTest > 
onlyRemovePendingTaskToSuspendShouldRemoveTaskFromPendingUpdateActions() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TasksTest > 
shouldOnlyKeepLastUpdateAction() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TasksTest > 
shouldOnlyKeepLastUpdateAction() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TasksTest > 
shouldAddAndRemovePendingTaskToRecycle() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TasksTest > 
shouldAddAndRemovePendingTaskToRecycle() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TasksTest > 
onlyRemovePendingTaskToUpdateInputPartitionsShouldRemoveTaskFromPendingUpdateActions()
 STARTED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TasksTest > 
onlyRemovePendingTaskToUpdateInputPartitionsShouldRemoveTaskFromPendingUpdateActions()
 PASSED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TasksTest > 
shouldVerifyIfPendingTaskToRecycleExist() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TasksTest > 
shouldVerifyIfPendingTaskToRecycleExist() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TasksTest > 
shouldAddAndRemovePendingTaskToUpdateInputPartitions() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TasksTest > 
shouldAddAndRemovePendingTaskToUpdateInputPartitions() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TasksTest > 
onlyRemovePendingTaskToCloseDirtyShouldRemoveTaskFromPendingUpdateActions() 
STARTED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TasksTest > 
onlyRemovePendingTaskToCloseDirtyShouldRemoveTaskFromPendingUpdateActions() 
PASSED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TasksTest > 
shouldAddAndRemovePendingTaskToSuspend() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 85 > TasksTest > 

Re: [kafka] branch trunk updated: MINOR: Do not reuse admin client across tests (#14225)

2023-08-17 Thread Girish L
How do I unsubscribe from these emails?

On Thu, Aug 17, 2023 at 11:24 PM  wrote:

> This is an automated email from the ASF dual-hosted git repository.
>
> mjsax pushed a commit to branch trunk
> in repository https://gitbox.apache.org/repos/asf/kafka.git
>
>
> The following commit(s) were added to refs/heads/trunk by this push:
>  new d85a7008133 MINOR: Do not reuse admin client across tests (#14225)
> d85a7008133 is described below
>
> commit d85a70081333a2ab9dd6593e99abf213a469ba2d
> Author: Lucas Brutschy 
> AuthorDate: Thu Aug 17 19:53:58 2023 +0200
>
> MINOR: Do not reuse admin client across tests (#14225)
>
> Reusing an admin client across tests can cause false positives in leak
> checkers, so don't do it.
>
> Reviewers: Divij Vaidya , Matthias J. Sax <
> matth...@confluent.io>
> ---
>  .../integration/AbstractResetIntegrationTest.java | 19
> ++-
>  1 file changed, 6 insertions(+), 13 deletions(-)
>
> diff --git
> a/streams/src/test/java/org/apache/kafka/streams/integration/AbstractResetIntegrationTest.java
> b/streams/src/test/java/org/apache/kafka/streams/integration/AbstractResetIntegrationTest.java
> index 4bd3515782a..05b119da064 100644
> ---
> a/streams/src/test/java/org/apache/kafka/streams/integration/AbstractResetIntegrationTest.java
> +++
> b/streams/src/test/java/org/apache/kafka/streams/integration/AbstractResetIntegrationTest.java
> @@ -16,6 +16,7 @@
>   */
>  package org.apache.kafka.streams.integration;
>
> +import org.apache.kafka.common.utils.Utils;
>  import org.apache.kafka.tools.StreamsResetter;
>  import org.apache.kafka.clients.CommonClientConfigs;
>  import org.apache.kafka.clients.admin.Admin;
> @@ -42,7 +43,6 @@ import org.apache.kafka.streams.kstream.Produced;
>  import org.apache.kafka.streams.kstream.TimeWindows;
>  import org.apache.kafka.test.IntegrationTest;
>  import org.apache.kafka.test.TestUtils;
> -import org.junit.AfterClass;
>  import org.junit.Assert;
>  import org.junit.Rule;
>  import org.junit.Test;
> @@ -54,7 +54,6 @@ import org.junit.rules.Timeout;
>  import java.io.BufferedWriter;
>  import java.io.File;
>  import java.io.FileWriter;
> -import java.time.Duration;
>  import java.util.ArrayList;
>  import java.util.Arrays;
>  import java.util.Collections;
> @@ -84,14 +83,6 @@ public abstract class AbstractResetIntegrationTest {
>  @Rule
>  public final TestName testName = new TestName();
>
> -@AfterClass
> -public static void afterClassCleanup() {
> -if (adminClient != null) {
> -adminClient.close(Duration.ofSeconds(10));
> -adminClient = null;
> -}
> -}
> -
>  protected Properties commonClientConfig;
>  protected Properties streamsConfig;
>  private Properties producerConfig;
> @@ -186,10 +177,12 @@ public abstract class AbstractResetIntegrationTest {
>  }
>
>  void cleanupTest() throws Exception {
> -if (streams != null) {
> -streams.close(Duration.ofSeconds(30));
> -}
> +Utils.closeQuietly(streams, "kafka streams");
>  IntegrationTestUtils.purgeLocalStreamsState(streamsConfig);
> +if (adminClient != null) {
> +Utils.closeQuietly(adminClient, "admin client");
> +adminClient = null;
> +}
>  }
>
>  private void add10InputElements() {
>
>
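
For reference, a minimal sketch of the per-test pattern this change moves toward: create the Admin client in the test setup and release it with Utils.closeQuietly in cleanup. The class, method names, and bootstrap address below are placeholders, not the actual test code.

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.utils.Utils;

public class PerTestAdminClientSketch {
    private Admin adminClient;

    void setupTest() {
        // Fresh client per test, so leak checkers see a matching create/close pair.
        final Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        adminClient = Admin.create(props);
    }

    void cleanupTest() {
        // closeQuietly tolerates null and swallows close-time exceptions.
        Utils.closeQuietly(adminClient, "admin client");
        adminClient = null;
    }
}
{code}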


[jira] [Created] (KAFKA-15375) When running in KRaft mode, LogManager may create CleanShutdown file by mistake

2023-08-17 Thread Vincent Jiang (Jira)
Vincent Jiang created KAFKA-15375:
-

 Summary: When running in KRaft mode, LogManager may create 
CleanShutdown file by mistake
 Key: KAFKA-15375
 URL: https://issues.apache.org/jira/browse/KAFKA-15375
 Project: Kafka
  Issue Type: Bug
Reporter: Vincent Jiang








[jira] [Created] (KAFKA-15374) ZK migration fails on configs for default broker resource

2023-08-17 Thread David Arthur (Jira)
David Arthur created KAFKA-15374:


 Summary: ZK migration fails on configs for default broker resource
 Key: KAFKA-15374
 URL: https://issues.apache.org/jira/browse/KAFKA-15374
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.5.1, 3.4.1
Reporter: David Arthur
 Fix For: 3.6.0, 3.4.2, 3.5.2


This error was seen while performing a ZK to KRaft migration on a cluster with 
configs for the default broker resource.

 
{code:java}
java.lang.NumberFormatException: For input string: ""
    at java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:67)
    at java.base/java.lang.Integer.parseInt(Integer.java:678)
    at java.base/java.lang.Integer.valueOf(Integer.java:999)
    at kafka.zk.ZkMigrationClient.$anonfun$migrateBrokerConfigs$2(ZkMigrationClient.scala:371)
    at kafka.zk.migration.ZkConfigMigrationClient.$anonfun$iterateBrokerConfigs$1(ZkConfigMigrationClient.scala:174)
    at kafka.zk.migration.ZkConfigMigrationClient.$anonfun$iterateBrokerConfigs$1$adapted(ZkConfigMigrationClient.scala:156)
    at scala.collection.immutable.BitmapIndexedMapNode.foreach(HashMap.scala:1076)
    at scala.collection.immutable.HashMap.foreach(HashMap.scala:1083)
    at kafka.zk.migration.ZkConfigMigrationClient.iterateBrokerConfigs(ZkConfigMigrationClient.scala:156)
    at kafka.zk.ZkMigrationClient.migrateBrokerConfigs(ZkMigrationClient.scala:370)
    at kafka.zk.ZkMigrationClient.cleanAndMigrateAllMetadata(ZkMigrationClient.scala:530)
    at org.apache.kafka.metadata.migration.KRaftMigrationDriver$MigrateMetadataEvent.run(KRaftMigrationDriver.java:618)
    at org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:127)
    at org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:210)
    at org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:181)
    at java.base/java.lang.Thread.run(Thread.java:833)
    at org.apache.kafka.common.utils.KafkaThread.run(KafkaThread.java:64)
{code}
 

This is due to not considering the default resource type when we collect the 
broker IDs in ZkMigrationClient#migrateBrokerConfigs.
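
A hedged sketch of the kind of guard that avoids the failure: the cluster-wide default broker config arrives with an empty resource name (hence the parse of ""), so it has to be special-cased rather than parsed as a broker id. The class and method names below are illustrative, not the actual ZkMigrationClient code.

{code:java}
import java.util.OptionalInt;

public class BrokerResourceNameSketch {

    static OptionalInt brokerIdFromResourceName(final String resourceName) {
        if (resourceName.isEmpty()) {
            // Cluster-wide default broker config: there is no numeric id to collect.
            return OptionalInt.empty();
        }
        return OptionalInt.of(Integer.parseInt(resourceName));
    }

    public static void main(final String[] args) {
        System.out.println(brokerIdFromResourceName(""));    // OptionalInt.empty
        System.out.println(brokerIdFromResourceName("101")); // OptionalInt[101]
    }
}
{code}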

 

 





Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2112

2023-08-17 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-15373) AdminClient#describeTopics should not throw InvalidTopicException if topic ID is not found

2023-08-17 Thread Michael Edgar (Jira)
Michael Edgar created KAFKA-15373:
-

 Summary: AdminClient#describeTopics should not throw 
InvalidTopicException if topic ID is not found
 Key: KAFKA-15373
 URL: https://issues.apache.org/jira/browse/KAFKA-15373
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.5.1
Reporter: Michael Edgar


Similar to KAFKA-7808.

In {{KafkaAdminClient#handleDescribeTopicsByIds}}, when the topic is not found 
by ID, an {{InvalidTopicException}} is thrown.

{code:java}
String topicName = cluster.topicName(topicId);
if (topicName == null) {
    future.completeExceptionally(
        new InvalidTopicException("TopicId " + topicId + " not found."));
    continue;
}
{code}

It would be better to use an {{UnknownTopicIdException}} in this case, which 
better aligns with the use of {{UnknownTopicOrPartitionException}} for the same 
scenario when describing topics by name.
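
A sketch of the suggested change (not an actual patch), assuming the same surrounding loop as the snippet above:

{code:java}
// org.apache.kafka.common.errors.UnknownTopicIdException
String topicName = cluster.topicName(topicId);
if (topicName == null) {
    // Complete with UnknownTopicIdException so describe-by-id mirrors the
    // UnknownTopicOrPartitionException behaviour of describe-by-name.
    future.completeExceptionally(
        new UnknownTopicIdException("TopicId " + topicId + " not found."));
    continue;
}
{code}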





[jira] [Created] (KAFKA-15372) MM2 rolling restart can drop configuration changes silently

2023-08-17 Thread Daniel Urban (Jira)
Daniel Urban created KAFKA-15372:


 Summary: MM2 rolling restart can drop configuration changes 
silently
 Key: KAFKA-15372
 URL: https://issues.apache.org/jira/browse/KAFKA-15372
 Project: Kafka
  Issue Type: Improvement
  Components: mirrormaker
Reporter: Daniel Urban


When MM2 is restarted, it tries to update the Connector configuration in all 
flows. This is a one-time attempt, and it fails if the Connect worker is not the 
leader of the group.

In a distributed setup and with a rolling restart, it is possible that for a 
specific flow, the Connect worker of the just restarted MM2 instance is not the 
leader, meaning that Connector configurations can get dropped.

For example, assuming 2 MM2 instances, and one flow A->B:
 # MM2 instance 1 is restarted, the worker inside MM2 instance 2 becomes the 
leader of A->B Connect group.
 # MM2 instance 1 tries to update the Connector configurations, but fails 
(instance 2 has the leader, not instance 1)
 # MM2 instance 2 is restarted, leadership moves to worker in MM2 instance 1
 # MM2 instance 2 tries to update the Connector configurations, but fails

At this point, the configuration changes made before the restart are never applied. 
Often, this also happens silently, without any indication.





[REVIEW REQUEST] Move ReassignPartitionsCommandArgsTest to java

2023-08-17 Thread Николай Ижиков
Hello.

I’m working on [1].
The goal of the ticket is to rewrite `ReassignPartitionsCommand` in Java.

The PR that moves the whole command is pretty big, so it makes sense to split it.
I prepared the PR [2] that moves a single test 
(ReassignPartitionsCommandArgsTest) to Java.

It is relatively small and simple (it touches only 3 files):

To review - https://github.com/apache/kafka/pull/14217
Big PR  - https://github.com/apache/kafka/pull/13247

Please review.

[1] https://issues.apache.org/jira/browse/KAFKA-14595
[2] https://github.com/apache/kafka/pull/14217

Re: [DISCUSS] KIP-960: Support interactive queries (IQv2) for versioned state stores

2023-08-17 Thread Alieh Saeedi
Hey Matthias,
thanks for the feedback

I think if one materializes a versioned store, then the query is posed to
the versioned state store. So the type of materialized store determines the
type of store and consequently all the classes for running the query (for
example, MeteredVersionedKeyValueStore instead of MeteredKeyValueStore and
so on). I added the piece of code for defining the versioned state store to
the example part of the KIP-960.

About the generics, using VersionedRecord<V> instead of V worked. Right
now, I am composing the integration tests. Let me complete the code and
confirm it 100%.
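
For illustration, a minimal Java sketch of the two signature options being discussed. The VersionedKeyQuery class and its asOf method are hypothetical here, sketched from the proposal; KeyQuery, Query, and VersionedRecord are existing Kafka Streams types.

{code:java}
import java.time.Instant;

import org.apache.kafka.streams.query.KeyQuery;
import org.apache.kafka.streams.query.Query;
import org.apache.kafka.streams.state.VersionedRecord;

public class VersionedQuerySignatureSketch {

    // Option B: a dedicated query type (hypothetical) that bakes the
    // VersionedRecord wrapper into the Query result type.
    public static final class VersionedKeyQuery<K, V> implements Query<VersionedRecord<V>> {
        private final K key;
        private final Instant asOfTimestamp;

        private VersionedKeyQuery(final K key, final Instant asOfTimestamp) {
            this.key = key;
            this.asOfTimestamp = asOfTimestamp;
        }

        public static <K, V> VersionedKeyQuery<K, V> withKey(final K key) {
            return new VersionedKeyQuery<>(key, null);
        }

        public VersionedKeyQuery<K, V> asOf(final Instant timestamp) {
            return new VersionedKeyQuery<>(key, timestamp);
        }
    }

    public static void main(final String[] args) {
        // Option A: reuse the existing KeyQuery and bind V = VersionedRecord<Long>.
        // (The asOf(...) method proposed for KeyQuery does not exist today.)
        final KeyQuery<String, VersionedRecord<Long>> optionA = KeyQuery.withKey("k");

        // Option B: the hypothetical VersionedKeyQuery hides the wrapper type,
        // so callers only spell out the plain value type.
        final VersionedKeyQuery<String, Long> optionB =
                VersionedKeyQuery.<String, Long>withKey("k").asOf(Instant.now());
    }
}
{code}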

About the KeyQuery class, I thought the KIP must contain just the newly
added stuff. OK, I will paste the whole class in KIP-960.

Thanks,
Alieh




On Thu, Aug 17, 2023 at 3:54 AM Matthias J. Sax  wrote:

> Thanks for updating the KIP and splitting into multiple ones. I am just
> going to reply for the single-key-single-timestamp case below.
>
> It seems the `KeyQuery.java` code snippet is "incomplete" -- the class
> definition is missing.
>
> At the same time, the example uses `VersionedKeyQuery` so I am not sure
> right now if you propose to re-use the existing `KeyQuery` class or
> introduce a new `VersionedKeyQuery` class?
>
> While it was suggested that we re-use the existing `KeyQuery` class, I
> am wondering what would happen if one uses the new `asOf` method, and
> passes the query into a non-versioned store?
>
> In the end, a non-versioned store does not know that there is an as-of
> timestamp set and thus might just do a plain lookup (it also only has a
> single value per key) and return whatever value it has stored?
>
> I am wondering if this would be semantically questionable and/or
> confusing for users (especially for timestamped stores)? -- Because the
> non-versioned store does not know anything about the timestamp, it can
> also not even check if it's set and raise an error.
>
>
> Did you try to prototype any of both approaches? Asking because I am
> wondering about generics and return types? Existing `KeyQuery` is defined
> as
>
> `KeyQuery<K, V> extends Query<V>` so `V` is the result type.
>
> However for the versioned-store we want the result type to be
> `VersionedRecord<V>` and thus we would need to set `V =
> VersionedRecord<V>` -- would this work or would the compiler tip over it
> (or would it work but still be confusing/complex for users to specify
> the right types)?
>
> For `VersionedKeyQuery` we could do:
>
> `VersionedKeyQuery<K, V> extends Query<VersionedRecord<V>>`
>
> what seems cleaner?
>
> Without writing code I always have a hard time reasoning about generics,
> so maybe trying out both approaches might shed some light?
>
>
>
>
> -Matthias
>
>
> On 8/15/23 9:03 AM, Alieh Saeedi wrote:
> > Hi all,
> > thanks to all for the great points you mentioned.
> >
> > Addressed reviews are listed as follows:
> > 1. The methods are defined as composable, as Lucas suggested. Now we have
> > even more types of single-key_multi-timestamp queries. As Matthias
> > suggested in his first review, now with composable methods, queries with
> a
> > lower time bound are also possible. The meaningless combinations are
> > prevented by throwing exceptions.
> > 2. I corrected and replaced asOf everywhere instead of until. I hope the
> > javadocs and the explanations in the KIPs are clear enough about the time
> > range. Matthias, Lucas, and Victoria asked about the exact time
> boundaries.
> > I assumed that if the time range is specified as [t1, t2], all the
> records
> > that have been inserted within this time range must be returned by the
> > query. But I think the point that all of you referred to and that
> Victoria
> > clarified very well is valid. Maybe the query must return "all the
> > records that are valid within the time range". Therefore, records that
> have
> > been inserted before t1 are also returned. Now, this makes more sense to
> me
> > as a user. By the way, it seems more like a product question.
> > 3. About the order of returned records, I added some boolean fields to the
> > classes to specify them. I still do not have any clue how hard the
> > implementation of this will be. The question is, is the order considered
> > for normal range queries as well?
> > 4. As Victoria pointed out the issue about listing tombstones, I changed
> > the VersionedRecord such that it can have NULL values as well. The
> question
> > is, what was the initial intention of setting the value in
> VersionedRecord
> > as NOT NULL? I am worried about breaking other parts of the code.
> > 5. About the motivation for defining the VersionedKeyQuery and
> > VersionedRangeQuery classes: I think my initial intention was to
> > distinguish between queries that return a single record and queries that
> > return a set of records. On the other hand, I put both
> > single-key_single-timestamp queries and single-key_multi-timestamp
> queries
> > in the same class, VersionedKeyQuery. Matthias complained about it as
> well.
> > Therefore, in my new 

Re: [DISCUSSION] KIP-965: Support disaster recovery between clusters by MirrorMaker

2023-08-17 Thread hudeqi
Thanks for your feedback, Fomenko. If there are no further points of discussion on this KIP, 
I'm going to initiate the voting process next week. Grateful.

best,
hudeqi


 -Original Message-
 From: "Igor Fomenko" 
 Sent: 2023-08-14 21:30:59 (Monday)
 To: dev@kafka.apache.org
 Cc: 
 Subject: Re: [DISCUSSION] KIP-965: Support disaster recovery between clusters 
by MirrorMaker
 


Mailing list threading improvements

2023-08-17 Thread Christofer Dutz
TL;DR: We’re updating how auto-generated email from Github will be
threaded on your mailing lists. If you want to keep the old defaults,
details are below.

We’re pleased to let you know that we’re tweaking the way that auto-
generated email from Github will appear on your mailing lists. This
will lead to more human-readable subject lines, and the ability of most
modern mail clients to correctly thread discussions originating on
Github.

Background: Many project mailing lists receive email auto-generated by
Github. The way that the subject lines are crafted leads to messages
from the same topic not being threaded together by most mail clients.
We’re fixing that.

The way that these messages are threaded is defined by a file -
.asf.yaml - in your git repositories. We’re changing the way that it
will work by default if you don’t choose settings. If you’re happy for
us to make this change, don’t do anything - the change will happen on
October the 1st 2023.

Details of the current default, as well as the proposed changes, are on
the following page, along with instructions on how to keep your current
settings, if you prefer:

https://community.apache.org/contributors/mailing-lists.html#configuring-the-subject-lines-of-the-emails-being-sent

Please copy d...@community.apache.org
on any feedback.

Chris, on behalf of the Comdev PMC


[DISCUSS] KIP-939: Support Participation in 2PC

2023-08-17 Thread Artem Livshits
Hello,

This is a discussion thread for
https://cwiki.apache.org/confluence/display/KAFKA/KIP-939%3A+Support+Participation+in+2PC
.

The KIP proposes extending Kafka transaction support (which already uses 2PC
under the hood) to enable atomicity of dual writes to Kafka and an external
database, and it helps to fix a long-standing Flink issue.

An example of code that uses the dual-write recipe with JDBC, and should
work for most SQL databases, is here:
https://github.com/apache/kafka/pull/14231.
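
For context, a rough sketch of the dual-write shape the recipe targets (this is not the PR's code; the topic, table, and in-memory H2 JDBC URL are placeholders). With today's API the two commits at the end are still independent, and closing that gap is what the KIP's 2PC participation is about.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DualWriteSketch {
    public static void main(final String[] args) throws Exception {
        final Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");     // placeholder
        props.put("transactional.id", "dual-write-example");  // placeholder
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
             Connection db = DriverManager.getConnection("jdbc:h2:mem:example")) {
            producer.initTransactions();
            db.setAutoCommit(false);

            producer.beginTransaction();
            producer.send(new ProducerRecord<>("payments", "order-1", "PAID"));
            try (Statement stmt = db.createStatement()) {
                stmt.executeUpdate(
                    "CREATE TABLE IF NOT EXISTS payments(id VARCHAR PRIMARY KEY, status VARCHAR)");
                stmt.executeUpdate("MERGE INTO payments KEY(id) VALUES('order-1', 'PAID')");
            }

            // Today these are two independent commits; a crash in between breaks
            // atomicity. KIP-939 is about letting an external transaction manager
            // prepare the Kafka transaction and decide both outcomes together.
            db.commit();
            producer.commitTransaction();
        }
    }
}
{code}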

The FLIP for the sister fix in Flink is here
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=255071710

-Artem