[jira] [Created] (KAFKA-14144) AlterPartition is not idempotent when requests time out

2022-08-04 Thread David Mao (Jira)
David Mao created KAFKA-14144:
-

 Summary: AlterPartition is not idempotent when requests time out
 Key: KAFKA-14144
 URL: https://issues.apache.org/jira/browse/KAFKA-14144
 Project: Kafka
  Issue Type: Bug
Reporter: David Mao


[https://github.com/apache/kafka/pull/12032] changed the validation order of 
AlterPartition requests to fence requests with a stale partition epoch before 
we compare the leader and ISR contents.

This results in a loss of idempotency if a leader does not receive an 
AlterPartition response because retries will receive an INVALID_UPDATE_VERSION 
error.
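
A minimal sketch of the regression (illustrative names only, not the actual controller code): a leader applies an AlterPartition at partition epoch E, the response is lost, and the leader retries with the same epoch and ISR.

{code:java}
import java.util.List;

// Hedged sketch: simplified stand-in types, not the real controller classes.
public class AlterPartitionIdempotencySketch {
    enum Error { NONE, INVALID_UPDATE_VERSION }

    int currentPartitionEpoch = 6;               // already bumped by the first, applied request
    List<Integer> currentIsr = List.of(1, 2, 3);

    // Order after https://github.com/apache/kafka/pull/12032: a stale partition epoch is
    // fenced before contents are compared, so the retry fails with INVALID_UPDATE_VERSION.
    Error validateEpochFirst(int reqEpoch, List<Integer> reqIsr) {
        if (reqEpoch < currentPartitionEpoch) return Error.INVALID_UPDATE_VERSION;
        if (reqIsr.equals(currentIsr)) return Error.NONE;   // no-op
        return apply(reqIsr);
    }

    // Previous order: an identical retry is recognized as a no-op first,
    // so the lost-response retry still succeeds and the call stays idempotent.
    Error validateContentsFirst(int reqEpoch, List<Integer> reqIsr) {
        if (reqIsr.equals(currentIsr)) return Error.NONE;   // idempotent retry
        if (reqEpoch < currentPartitionEpoch) return Error.INVALID_UPDATE_VERSION;
        return apply(reqIsr);
    }

    private Error apply(List<Integer> reqIsr) {
        currentIsr = List.copyOf(reqIsr);
        currentPartitionEpoch++;
        return Error.NONE;
    }
}
{code}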



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1121

2022-08-04 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 504100 lines...]
[2022-08-04T21:58:52.455Z] > Task :connect:api:testSrcJar
[2022-08-04T21:58:52.455Z] > Task 
:connect:api:publishMavenJavaPublicationToMavenLocal
[2022-08-04T21:58:52.455Z] > Task :connect:api:publishToMavenLocal
[2022-08-04T21:58:52.455Z] 
[2022-08-04T21:58:52.455Z] > Task :streams:javadoc
[2022-08-04T21:58:52.455Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/src/main/java/org/apache/kafka/streams/processor/StreamPartitioner.java:50:
 warning - Tag @link: reference not found: 
org.apache.kafka.clients.producer.internals.DefaultPartitioner
[2022-08-04T21:58:53.407Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:890:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-04T21:58:53.407Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:919:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-04T21:58:53.407Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:939:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-04T21:58:53.407Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:854:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-04T21:58:53.407Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:890:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-04T21:58:53.407Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:919:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-04T21:58:53.407Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:939:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-04T21:58:54.359Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:84:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-04T21:58:54.359Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:136:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-04T21:58:54.359Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:147:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-04T21:58:54.359Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/src/main/java/org/apache/kafka/streams/kstream/Repartitioned.java:101:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-04T21:58:54.359Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/src/main/java/org/apache/kafka/streams/kstream/Repartitioned.java:167:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-04T21:58:55.922Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:58:
 warning - Tag @link: missing '#': "org.apache.kafka.streams.StreamsBuilder()"
[2022-08-04T21:58:55.922Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:58:
 warning - Tag @link: can't find org.apache.kafka.streams.StreamsBuilder() in 
org.apache.kafka.streams.TopologyConfig
[2022-08-04T21:58:55.922Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/src/main/java/org/apache/kafka/streams/TopologyDescription.java:38:
 warning - Tag @link: reference not found: ProcessorContext#forward(Object, 
Object) forwards
[2022-08-04T21:58:55.922Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/src/main/java/org/apache/kafka/streams/query/Position.java:44:
 warning - Tag @link: can't find query(Query,
[2022-08-04T21:58:55.922Z]  PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2022-08-04T21:58:55.922Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:109:
 warning - Tag @link: reference not found: this#getResult()
[2022-08-04T21:58:55.922Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk_2/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:116:
 warning - Tag @link: reference not found: this#getFailureReason()
[2022-08-04T21:58:55.922Z] 

Re: [DISCUSS] Apache Kafka 3.3.0 Release

2022-08-04 Thread José Armando García Sancio
On Thu, Aug 4, 2022 at 2:33 PM Niket Goel  wrote:
> I would like to request adding KIP-859 [1] to the 3.3.0 release. The KIP
> adds some important metrics to allow visibility into KRaft log processing
> related errors.

Thanks Niket. Adding these metrics to the 3.3.0 release should be okay
since they are low risk and important for monitoring the health and
availability of a KRaft cluster.

I have updated the 3.3.0 release page.

-- 
-José


Re: [DISCUSS] Apache Kafka 3.3.0 Release

2022-08-04 Thread Niket Goel
(hit send too soon)
KIP --
https://cwiki.apache.org/confluence/display/KAFKA/KIP-859%3A+Add+Metadata+Log+Processing+Error+Related+Metrics
JIRA -- https://issues.apache.org/jira/browse/KAFKA-14114

On Thu, Aug 4, 2022 at 2:33 PM Niket Goel  wrote:

> Hey Jose,
>
> I would like to request adding KIP-859 [1] to the 3.3.0 release. The KIP
> adds some important metrics to allow visibility into KRaft log processing
> related errors.
> The KIP was approved today and I have a PR ready for review, which I hope
> will be reviewed and merged within the next week.
>
> Thanks
> Niket Goel
>
> On Thu, Aug 4, 2022 at 8:58 AM José Armando García Sancio
>  wrote:
>
>> On Thu, Aug 4, 2022 at 8:37 AM Justine Olshan
>>  wrote:
>> >
>> > Hey Jose.
>> > I found a gap in handling ISR changes in ZK mode. We just need to
>> prevent
>> > brokers that are offline from being added to ISR. Since KIP-841 is part
>> of
>> > this release and the fix should be small (a few lines), I propose adding
>> > https://issues.apache.org/jira/browse/KAFKA-14140 to the 3.3 release.
>> > I'm hoping to have the PR reviewed and completed next week.
>>
>> I think we should include this fix in 3.3.0 for ZK mode. We
>> implemented this fix for KRaft mode and it will make Apache Kafka
>> safer when handling broker shutdowns.
>>
>> Thanks for volunteering to fix this.
>> --
>> -José
>>
>
>
> --
> - Niket
>


-- 
- Niket


Re: [DISCUSS] Apache Kafka 3.3.0 Release

2022-08-04 Thread Niket Goel
Hey Jose,

I would like to request adding KIP-859 [1] to the 3.3.0 release. The KIP
adds some important metrics to allow visibility into KRaft log processing
related errors.
The KIP was approved today and I have a PR ready for review, which I hope
will be reviewed and merged within the next week.

Thanks
Niket Goel

On Thu, Aug 4, 2022 at 8:58 AM José Armando García Sancio
 wrote:

> On Thu, Aug 4, 2022 at 8:37 AM Justine Olshan
>  wrote:
> >
> > Hey Jose.
> > I found a gap in handling ISR changes in ZK mode. We just need to prevent
> > brokers that are offline from being added to ISR. Since KIP-841 is part
> of
> > this release and the fix should be small (a few lines), I propose adding
> > https://issues.apache.org/jira/browse/KAFKA-14140 to the 3.3 release.
> > I'm hoping to have the PR reviewed and completed next week.
>
> I think we should include this fix in 3.3.0 for ZK mode. We
> implemented this fix for KRaft mode and it will make Apache Kafka
> safer when handling broker shutdowns.
>
> Thanks for volunteering to fix this.
> --
> -José
>


-- 
- Niket


Re: [VOTE] KIP-859: Add Metadata Log Processing Error Related Metrics

2022-08-04 Thread Niket Goel
Thanks everyone for the feedback and votes. I have three +1s (David, Colin,
Jose).
Closing this vote now.

On Thu, Aug 4, 2022 at 2:09 PM José Armando García Sancio
 wrote:

> Thanks for the improvement. LGTM. +1 (binding).
>
> --
> -José
>

- Niket


Re: [VOTE] KIP-859: Add Metadata Log Processing Error Related Metrics

2022-08-04 Thread José Armando García Sancio
Thanks for the improvement. LGTM. +1 (binding).

-- 
-José


Re: [DISCUSS] KIP-860: Add client-provided option to guard against unintentional replication factor change during partition reassignments

2022-08-04 Thread Vikas Singh
Thanks Stanislav for the KIP. It seems like a reasonable proposal,
preventing users from accidentally altering the replica set under certain
conditions. I have a couple of comments:


> In the case of an already-reassigning partition being reassigned again,
the validation compares the targetReplicaSet size of the reassignment to
the targetReplicaSet size of the new reassignment and throws if those
differ.
Can you add more detail to this, or clarify what the targetReplicaSet is
(e.g., why not the sourceReplicaSet?) and how the target replica set will be
calculated?

And what about the reassign partitions CLI? Do we want to expose the option
there too?

Cheers,
Vikas

On Thu, Jul 28, 2022 at 1:59 AM Stanislav Kozlovski 
wrote:

> Hey all,
>
> I'd like to start a discussion on a proposal to keep API users from
> inadvertently increasing the replication factor of a topic through
> the alter partition reassignments API. The KIP describes two fairly
> easy-to-hit race conditions in which this can happen.
>
> The KIP itself is pretty simple, yet has a couple of alternatives that can
> help solve the same problem. I would appreciate thoughts from the community
> on how you think we should proceed, and whether the proposal makes sense in
> the first place.
>
> Thanks!
>
> KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-860%3A+Add+client-provided+option+to+guard+against+replication+factor+change+during+partition+reassignments
> JIRA: https://issues.apache.org/jira/browse/KAFKA-14121
>
> --
> Best,
> Stanislav
>


[jira] [Resolved] (KAFKA-14115) Password configs are logged in plaintext in KRaft

2022-08-04 Thread David Arthur (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Arthur resolved KAFKA-14115.
--
  Assignee: David Arthur  (was: Prem Kamal)
Resolution: Fixed

> Password configs are logged in plaintext in KRaft
> -
>
> Key: KAFKA-14115
> URL: https://issues.apache.org/jira/browse/KAFKA-14115
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Reporter: David Arthur
>Assignee: David Arthur
>Priority: Critical
> Fix For: 3.3.0, 3.4.0, 3.2.2
>
>
> While investigating KAFKA-14111, I also noticed that 
> ConfigurationControlManager is logging sensitive configs in plaintext at INFO 
> level.
> {code}
> [2022-07-27 12:14:09,927] INFO [Controller 1] ConfigResource(type=BROKER, 
> name='1'): set configuration listener.name.external.ssl.key.password to bar 
> (org.apache.kafka.controller.ConfigurationControlManager)
> {code}
> Once this new config reaches the broker, it is logged again, but this time it 
> is redacted
> {code}
> [2022-07-27 12:14:09,957] INFO [BrokerMetadataPublisher id=1] Updating broker 
> 1 with new configuration : listener.name.external.ssl.key.password -> 
> [hidden] (kafka.server.metadata.BrokerMetadataPublisher)
> {code}
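
A minimal sketch of the redaction the broker-side log applies and the controller-side log is missing; isSensitive() and the suffix list are illustrative assumptions, not the real Kafka helper:

{code:java}
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class ConfigLoggingSketch {
    // Assumption for the sketch: treat anything that looks like a credential as sensitive.
    private static final Set<String> SENSITIVE_SUFFIXES = Set.of("password", "secret");

    static boolean isSensitive(String name) {
        String lower = name.toLowerCase();
        return SENSITIVE_SUFFIXES.stream().anyMatch(lower::endsWith);
    }

    // Redact sensitive values before logging, as the broker-side message above shows.
    static String describeForLog(Map<String, String> configs) {
        return configs.entrySet().stream()
                .map(e -> e.getKey() + " -> " + (isSensitive(e.getKey()) ? "[hidden]" : e.getValue()))
                .collect(Collectors.joining(", "));
    }

    public static void main(String[] args) {
        // Prints the password as [hidden] instead of "bar" (entry order may vary).
        System.out.println(describeForLog(Map.of(
                "listener.name.external.ssl.key.password", "bar",
                "compression.type", "lz4")));
    }
}
{code}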



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14136) AlterConfigs in KRaft does not generate records for unchanged values

2022-08-04 Thread David Arthur (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Arthur resolved KAFKA-14136.
--
Resolution: Fixed

> AlterConfigs in KRaft does not generate records for unchanged values
> 
>
> Key: KAFKA-14136
> URL: https://issues.apache.org/jira/browse/KAFKA-14136
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Reporter: David Arthur
>Assignee: David Arthur
>Priority: Major
> Fix For: 3.3.0, 3.4.0, 3.2.2
>
>
> In ZK, when handling LegacyAlterConfigs or IncrementalAlterConfigs, we call 
> certain code paths regardless of what values are included in the request. We 
> utilize this behavior to force a broker to reload a keystore or truststore 
> from disk (we send an AlterConfig with the keystore path unchanged).
> In KRaft, however, we have an optimization to only generate ConfigRecords if 
> the incoming AlterConfig request will result in actual config changes. This 
> means the broker never receives any records for "no-op" config changes and we 
> cannot trigger certain code paths. 
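
A minimal sketch of the no-op filtering described above (ConfigRecord and the method names are illustrative stand-ins, not the actual KRaft controller code); dropping the "unchanged" check mimics the ZK behavior that makes the keystore-reload trick work:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class AlterConfigsNoOpSketch {
    record ConfigRecord(String name, String value) {}

    // KRaft-style behavior as described: only changed values produce records, so
    // re-sending an unchanged keystore path yields no record and the broker never
    // gets a signal to reload the keystore from disk.
    static List<ConfigRecord> toRecords(Map<String, String> current, Map<String, String> requested) {
        List<ConfigRecord> records = new ArrayList<>();
        requested.forEach((name, value) -> {
            boolean unchanged = value.equals(current.get(name));
            if (!unchanged) {            // removing this check would mirror the ZK code path
                records.add(new ConfigRecord(name, value));
            }
        });
        return records;
    }
}
{code}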



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Last sprint to finish line: Replace EasyMock/Powermock with Mockito

2022-08-04 Thread Matthew Benedict de Detrich
I will assign myself to KAFKA-14132 and KAFKA-14133; thanks for the detailed notes on 
gotchas for the migration.

Regards

--
Matthew de Detrich
Aiven Deutschland GmbH
Immanuelkirchstraße 26, 10405 Berlin
Amtsgericht Charlottenburg, HRB 209739 B

Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
m: +491603708037
w: aiven.io e: matthew.dedetr...@aiven.io
On 4. Aug 2022, 19:27 +0200, dev@kafka.apache.org, wrote:
>
> https://github.com/apache/kafka/pull/12465


Re: [VOTE] KIP-859: Add Metadata Log Processing Error Related Metrics

2022-08-04 Thread Niket Goel
Hey Jose,

> How about the inactive controller? Are inactive controllers going to
update this metric when they encounter an error when replaying a
record?


Yes, this metric will be reported for both active and inactive controllers.
Inactive controllers will update this metric when they encounter any
error while replaying records.

- Niket

On Thu, Aug 4, 2022 at 11:30 AM José Armando García Sancio
 wrote:

> Thanks for the KIP Niket.
>
> kafka.controller:type=KafkaController,name=MetadataErrorCount: Reports the
> number of times this controller node has renounced leadership of the
> metadata quorum owing to an error encountered during event processing
>
> How about the inactive controller? Are inactive controllers going to
> update this metric when they encounter an error when replaying a
> record?
>
> Thanks!
> José
>


-- 
- Niket


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.3 #23

2022-08-04 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 569202 lines...]
[2022-08-04T18:33:48.221Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers PASSED
[2022-08-04T18:33:48.221Z] 
[2022-08-04T18:33:48.221Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers STARTED
[2022-08-04T18:33:49.258Z] 
[2022-08-04T18:33:49.258Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers PASSED
[2022-08-04T18:33:53.669Z] streams-1: SMOKE-TEST-CLIENT-CLOSED
[2022-08-04T18:33:53.669Z] streams-5: SMOKE-TEST-CLIENT-CLOSED
[2022-08-04T18:33:53.669Z] streams-2: SMOKE-TEST-CLIENT-CLOSED
[2022-08-04T18:33:53.669Z] streams-4: SMOKE-TEST-CLIENT-CLOSED
[2022-08-04T18:33:53.669Z] streams-0: SMOKE-TEST-CLIENT-CLOSED
[2022-08-04T18:33:53.669Z] streams-3: SMOKE-TEST-CLIENT-CLOSED
[2022-08-04T18:34:01.883Z] 
[2022-08-04T18:34:01.883Z] BUILD SUCCESSFUL in 2h 35m 59s
[2022-08-04T18:34:01.883Z] 212 actionable tasks: 115 executed, 97 up-to-date
[2022-08-04T18:34:01.883Z] 
[2022-08-04T18:34:01.883Z] See the profiling report at: 
file:///home/jenkins/workspace/Kafka_kafka_3.3/build/reports/profile/profile-2022-08-04-15-58-08.html
[2022-08-04T18:34:01.883Z] A fine-grained performance profile is available: use 
the --scan option.
[Pipeline] junit
[2022-08-04T18:34:02.919Z] Recording test results
[2022-08-04T18:34:25.612Z] [Checks API] No suitable checks publisher found.
[Pipeline] echo
[2022-08-04T18:34:25.614Z] Skipping Kafka Streams archetype test for Java 17
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[2022-08-04T18:34:37.279Z] 
[2022-08-04T18:34:37.279Z] 
org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[at_least_once] PASSED
[2022-08-04T18:34:37.279Z] 
[2022-08-04T18:34:37.279Z] 
org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[exactly_once] STARTED
[2022-08-04T18:35:38.540Z] 
[2022-08-04T18:35:38.540Z] 
org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[exactly_once] PASSED
[2022-08-04T18:35:38.540Z] 
[2022-08-04T18:35:38.540Z] 
org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[exactly_once_v2] STARTED
[2022-08-04T18:36:27.813Z] 
[2022-08-04T18:36:27.814Z] 
org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[exactly_once_v2] PASSED
[2022-08-04T18:36:27.814Z] 
[2022-08-04T18:36:27.814Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = true] STARTED
[2022-08-04T18:36:31.522Z] 
[2022-08-04T18:36:31.522Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = true] PASSED
[2022-08-04T18:36:31.522Z] 
[2022-08-04T18:36:31.522Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = true] STARTED
[2022-08-04T18:36:38.580Z] 
[2022-08-04T18:36:38.580Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = true] PASSED
[2022-08-04T18:36:38.580Z] 
[2022-08-04T18:36:38.580Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = true] STARTED
[2022-08-04T18:36:44.424Z] 
[2022-08-04T18:36:44.424Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = true] PASSED
[2022-08-04T18:36:44.424Z] 
[2022-08-04T18:36:44.424Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = true] STARTED
[2022-08-04T18:36:51.502Z] 
[2022-08-04T18:36:51.502Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = true] PASSED
[2022-08-04T18:36:51.502Z] 
[2022-08-04T18:36:51.502Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = true] STARTED
[2022-08-04T18:36:58.561Z] 
[2022-08-04T18:36:58.561Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = true] PASSED
[2022-08-04T18:36:58.561Z] 
[2022-08-04T18:36:58.561Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = true] STARTED
[2022-08-04T18:37:05.835Z] 
[2022-08-04T18:37:05.835Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = true] PASSED

Re: [VOTE] KIP-859: Add Metadata Log Processing Error Related Metrics

2022-08-04 Thread José Armando García Sancio
Thanks for the KIP Niket.

> kafka.controller:type=KafkaController,name=MetadataErrorCount: Reports the 
> number of times this controller node has renounced leadership of the metadata 
> quorum owing to an error encountered during event processing

How about the inactive controller? Are inactive controllers going to
update this metric when they encounter an error when replaying a
record?

Thanks!
José
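
For reference, a hedged sketch of reading this MBean over JMX; the MBean name is taken
from the KIP text quoted above, while the JMX port and the attribute name ("Value") are
assumptions that depend on how the metric is registered and exposed:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class MetadataErrorCountReader {
    public static void main(String[] args) throws Exception {
        // Assumes the controller exposes JMX on localhost:9999 (e.g. via JMX_PORT).
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            ObjectName name =
                    new ObjectName("kafka.controller:type=KafkaController,name=MetadataErrorCount");
            // Attribute name is an assumption; Kafka gauges are typically exposed as "Value".
            Object value = conn.getAttribute(name, "Value");
            System.out.println("MetadataErrorCount = " + value);
        }
    }
}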


Re: Last sprint to finish line: Replace EasyMock/Powermock with Mockito

2022-08-04 Thread Divij Vaidya
Hi everyone

Here is a quick update on the progress.

Open PRs (pending review):

   1. Streams - https://github.com/apache/kafka/pull/12449
   2. Streams - https://github.com/apache/kafka/pull/12465
   3. Streams - https://github.com/apache/kafka/pull/12459
   4. Connect - https://github.com/apache/kafka/pull/12484
   5. Connect - https://github.com/apache/kafka/pull/12473
   6. Connect - https://github.com/apache/kafka/pull/12409
   7. Connect - https://github.com/apache/kafka/pull/12472


Open tasks (pending owners):

   1. https://issues.apache.org/jira/browse/KAFKA-14132 (need owners for
   separate individual tests)
   2. https://issues.apache.org/jira/browse/KAFKA-14133


General guidance to reduce code review churn when working on these test
conversions:

   1. Please use @RunWith(MockitoJUnitRunner.StrictStubs.class) since it
   provides many benefits (see the sketch after this list).
   2. Please do not perform the JUnit 5 migration in the same PR as the Mockito
   conversion, to keep the changes small and easy to review. We will follow up
   with a blanket JUnit 5 conversion (similar to this
   ) when the Mockito migration is
   complete.
   3. Please use the @Mock annotation to create mocks (Chris Egerton has added
   this comment on various PRs, hence calling it out).
   4. Note that @RunWith(MockitoJUnitRunner.StrictStubs.class) verifies the
   invocation of declared stubs automatically. If a stub is not invoked, the
   test throws an UnnecessaryStubbingException. Note that this doesn't seem to
   work for `mockStatic`, so I would suggest explicitly verifying stub
   invocations there.
   5. As a reference, you can use the merged PR from Chris Egerton here:
   https://github.com/apache/kafka/pull/12409
   6. Add a verification step in the description that the test has
   successfully run with the command `./gradlew connect:runtime:unitTest` (or
   equivalent for the module you are changing the test for). Additionally, you
   can add the code coverage report using `./gradlew streams:reportCoverage
   -PenableTestCoverage=true -Dorg.gradle.parallel=false` to verify that no
   test assertion has been accidentally removed during the change.
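
As a reference shape for these conversions, here is a minimal sketch of a JUnit 4 test
that follows points 1, 3 and 4 above; RetryPolicy and Clock are illustrative classes,
not real Kafka code:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

// Strict stubbing fails the test with an UnnecessaryStubbingException if a stub
// declared with when(...) is never used, which catches stale expectations.
@RunWith(MockitoJUnitRunner.StrictStubs.class)
public class RetryPolicyTest {

    interface Clock { long milliseconds(); }

    // Illustrative class under test.
    static final class RetryPolicy {
        private final Clock clock;
        RetryPolicy(Clock clock) { this.clock = clock; }
        long nextRetryMs(long backoffMs) { return clock.milliseconds() + backoffMs; }
    }

    @Mock
    private Clock clock;   // created by the runner; no manual mock creation needed

    @Test
    public void shouldComputeNextRetryFromClock() {
        when(clock.milliseconds()).thenReturn(1_000L);

        RetryPolicy policy = new RetryPolicy(clock);

        assertEquals(1_500L, policy.nextRetryMs(500L));
        verify(clock).milliseconds();
    }
}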


*Chris*, would you like to add anything else to the general guidance above
which would help reduce the code review churn?

--
Divij Vaidya



On Mon, Aug 1, 2022 at 6:49 PM Divij Vaidya  wrote:

> Hi folks
>
> We have been trying to replace EasyMock/Powermock with Mockito
>  for quite a while.
> This adds complications for migrating to JDK 17 & JUnit 5. Significant
> contributions have been made by various folks towards this goal and the
> finish line is almost in sight.
>
> Let's join forces this week and get the task done!
>
> Christo (cc'ed) and I will be spending time converting the straggler tests
> during this week.
>
> At this stage, we are missing a shepherd to help us wrap up this task. *Could
> we please solicit some code review bandwidth from a committer for this week
> to help us reach the finish line?*
>
> Current pending PR requests:
> 1. https://github.com/apache/kafka/pull/12459
> 2. https://github.com/apache/kafka/pull/12465
> 3. https://github.com/apache/kafka/pull/12418
>
> Regards,
> Divij Vaidya
>
>


[jira] [Resolved] (KAFKA-13313) In KRaft mode, CreateTopic should return the topic configs in the response

2022-08-04 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe resolved KAFKA-13313.
--
Resolution: Duplicate

> In KRaft mode, CreateTopic should return the topic configs in the response
> --
>
> Key: KAFKA-13313
> URL: https://issues.apache.org/jira/browse/KAFKA-13313
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Affects Versions: 3.0.0
>Reporter: Jun Rao
>Priority: Major
>
> ReplicationControlManager.createTopic() doesn't seem to populate the configs 
> in CreatableTopicResult. ZkAdminManager.createTopics() does that.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: ARM/PowerPC builds

2022-08-04 Thread Colin McCabe
Hi Matthew,

Can you open a JIRA for the test failures you have seen on M1?

By the way, I have an M1 myself.

best,
Colin

On Thu, Aug 4, 2022, at 04:12, Matthew Benedict de Detrich wrote:
> Quite happy to see this change go through since the ARM builds were 
> constantly failing; however, I reiterate what Divij Vaidya is saying. I 
> just recently got a new MacBook M1 laptop that has ARM architecture and 
> even locally the tests fail (these are the same tests that also failed 
> in Jenkins).
>
> We should get to the root of the issue, especially as more people will get 
> newer Apple laptops over time.
>
> --
> Matthew de Detrich
> Aiven Deutschland GmbH
> Immanuelkirchstraße 26, 10405 Berlin
> Amtsgericht Charlottenburg, HRB 209739 B
>
> Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> m: +491603708037
> w: aiven.io e: matthew.dedetr...@aiven.io
> On 4. Aug 2022, 12:36 +0200, Divij Vaidya , wrote:
>> Thank you. This would greatly improve the PR experience since now there is
>> a higher probability of it being green.
>>
>> Side question though, do we know why the ARM tests are timing out? Should we
>> start a JIRA with Apache Infra to find the root cause?
>>
>> —
>> Divij Vaidya
>>
>>
>>
>> On Thu, Aug 4, 2022 at 12:42 AM Colin McCabe  wrote:
>>
>> > Just a quick note. Today we committed
>> > https://github.com/apache/kafka/pull/12380 , "MINOR: Remove ARM/PowerPC
>> > builds from Jenkinsfile #12380". This PR removes the ARM and PowerPC builds
>> > from the Jenkinsfile.
>> >
>> > The rationale is that these builds seem to be failing all the time, and
>> > this is very disruptive. I personally didn't see any successes in the last
>> > week or two. So I think we need to rethink this integration a bit.
>> >
>> > I'd suggest that we run these builds as nightly builds rather than on each
>> > commit. It's going to be rare that we make a change that succeeds on x86
>> > but breaks on PowerPC or ARM. This would let us have very long timeouts on
>> > our ARM and PowerPC builds (they could take all night if necessary), hence
>> > avoiding this issue.
>> >
>> > best,
>> > Colin
>> >
>> --
>> Divij Vaidya


Re: 3.3 release date?

2022-08-04 Thread Gregory M. Foreman
I appreciate the update and the references, José.

> On Aug 4, 2022, at 12:06 PM, José Armando García Sancio 
>  wrote:
> 
> On Thu, Aug 4, 2022 at 6:42 AM Gregory M. Foreman
>  wrote:
>> indicates that 3.3 would be the production-ready version of KRaft and 
>> available this month.  Is this month still a valid target?
> 
> Hi Greg,
> 
> Thanks for your interest in KRaft and Apache Kafka 3.3.0. You can
> follow the 3.3.0 release wiki page[1] and 3.3.0 discussion thread[2]
> for the latest information on the release.
> 
> We still have a good number of blocker issues for the 3.3.0 release.
> It is my goal to have the first RC this month. I encourage everyone to
> test and validate this RC.
> 
> [1] https://cwiki.apache.org/confluence/x/-xahD
> [2] https://lists.apache.org/thread/cmol5bcf011s1xl91rt4ylb1dgz2vb1r
> 
> -- 
> -José



Re: 3.3 release date?

2022-08-04 Thread José Armando García Sancio
On Thu, Aug 4, 2022 at 6:42 AM Gregory M. Foreman
 wrote:
> indicates that 3.3 would be the production-ready version of KRaft and 
> available this month.  Is this month still a valid target?

Hi Greg,

Thanks for your interest in KRaft and Apache Kafka 3.3.0. You can
follow the 3.3.0 release wiki page[1] and 3.3.0 discussion thread[2]
for the latest information on the release.

We still have a good number of blocker issues for the 3.3.0 release.
It is my goal to have the first RC this month. I encourage everyone to
test and validate this RC.

[1] https://cwiki.apache.org/confluence/x/-xahD
[2] https://lists.apache.org/thread/cmol5bcf011s1xl91rt4ylb1dgz2vb1r

-- 
-José


Re: [DISCUSS] Apache Kafka 3.3.0 Release

2022-08-04 Thread José Armando García Sancio
On Thu, Aug 4, 2022 at 8:37 AM Justine Olshan
 wrote:
>
> Hey Jose.
> I found a gap in handling ISR changes in ZK mode. We just need to prevent
> brokers that are offline from being added to ISR. Since KIP-841 is part of
> this release and the fix should be small (a few lines), I propose adding
> https://issues.apache.org/jira/browse/KAFKA-14140 to the 3.3 release.
> I'm hoping to have the PR reviewed and completed next week.

I think we should include this fix in 3.3.0 for ZK mode. We
implemented this fix for KRaft mode and it will make Apache Kafka
safer when handling broker shutdowns.

Thanks for volunteering to fix this.
-- 
-José


[jira] [Resolved] (KAFKA-6080) Transactional EoS for source connectors

2022-08-04 Thread Chris Egerton (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Egerton resolved KAFKA-6080.
--
Fix Version/s: 3.3.0
   Resolution: Fixed

> Transactional EoS for source connectors
> ---
>
> Key: KAFKA-6080
> URL: https://issues.apache.org/jira/browse/KAFKA-6080
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Antony Stubbs
>Assignee: Chris Egerton
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.3.0
>
>
> Exactly once (eos) message production for source connectors.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-10000) Atomic commit of source connector records and offsets

2022-08-04 Thread Chris Egerton (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Egerton resolved KAFKA-10000.
---
Resolution: Done

> Atomic commit of source connector records and offsets
> -
>
> Key: KAFKA-10000
> URL: https://issues.apache.org/jira/browse/KAFKA-10000
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Blocker
>  Labels: needs-kip
> Fix For: 3.3.0
>
>
> It'd be nice to be able to configure source connectors such that their 
> offsets are committed if and only if all records up to that point have been 
> ack'd by the producer. This would go a long way towards EOS for source 
> connectors.
>  
> This differs from https://issues.apache.org/jira/browse/KAFKA-6079, which is 
> marked as {{WONTFIX}} since it only concerns enabling the idempotent producer 
> for source connectors and is not concerned with source connector offsets.
> This also differs from https://issues.apache.org/jira/browse/KAFKA-6080, 
> which had a lot of discussion around allowing connector-defined transaction 
> boundaries. The suggestion in this ticket is to only use source connector 
> offset commits as the transaction boundaries for connectors; allowing 
> connector-specified transaction boundaries can be addressed separately.
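
For context, a hedged sketch of the underlying idea: the source records and the corresponding offset record are written inside a single producer transaction, so one becomes visible exactly when the other does. The topic names, transactional.id, and offset payload below are illustrative; this is not the Connect framework code.

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AtomicSourceCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("transactional.id", "source-task-0");   // illustrative id
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            try {
                // The source record and the offset record land in the same transaction,
                // so the offset is committed if and only if the record is.
                producer.send(new ProducerRecord<>("data-topic", "k", "record-1"));
                producer.send(new ProducerRecord<>("connect-offsets", "source-task-0",
                        "{\"position\": 42}"));
                producer.commitTransaction();
            } catch (Exception e) {
                producer.abortTransaction();   // error handling simplified for the sketch
                throw new RuntimeException(e);
            }
        }
    }
}
{code}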



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14143) Exactly-once source system tests

2022-08-04 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-14143:
-

 Summary: Exactly-once source system tests
 Key: KAFKA-14143
 URL: https://issues.apache.org/jira/browse/KAFKA-14143
 Project: Kafka
  Issue Type: Task
  Components: KafkaConnect
Reporter: Chris Egerton
Assignee: Chris Egerton


System tests for the exactly-once source connector support introduced in 
[KIP-618|https://cwiki.apache.org/confluence/display/KAFKA/KIP-618%3A+Exactly-Once+Support+for+Source+Connectors]
 / KAFKA-10000.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14142) Improve information returned about the cluster metadata partition

2022-08-04 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-14142:
--

 Summary: Improve information returned about the cluster metadata 
partition
 Key: KAFKA-14142
 URL: https://issues.apache.org/jira/browse/KAFKA-14142
 Project: Kafka
  Issue Type: Improvement
  Components: kraft
Reporter: Jose Armando Garcia Sancio
Assignee: Jason Gustafson
 Fix For: 3.3.0


The Apache Kafka operator needs to know when it is safe to format and start a 
KRaft controller that had a disk failure of the metadata log dir.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] Apache Kafka 3.3.0 Release

2022-08-04 Thread Chris Egerton
> On Wed, Jul 13, 2022 at 10:01 AM Sagar  wrote:
> > Well actually I have 2 approved PRs from Kafka Connect:
> >
> > https://github.com/apache/kafka/pull/12321
> > https://github.com/apache/kafka/pull/12309
> >
> > Not sure how to get these merged though but I think these can go into
3.3
> > release.
>
> Thank you for the fixes. What do you think Chris Egerton since you
> reviewed them and merged them into trunk?

I think these are useful contributions but probably not worth backporting
at this stage. They do not address regressions and they do not impact the
stability of the release. I am looking forward to having
https://github.com/apache/kafka/pull/12309 in 3.4/4.0, though!

> On Thu, Jul 28, 2022 at 2:16 PM Chris Egerton 
wrote:
> > Would it be okay to backport https://github.com/apache/kafka/pull/12451
to the current 3.3 branch? It's a strictly cosmetic change that updates a
misleading comment about exactly-once support for source connectors. I'm
hoping it'll make life easier for anyone who has to debug this feature by
saving some confusion.
>
> Yes. Feel free to cherry pick it to the 3.3.0 branch. As you said, it
> is mainly a formatting change and a comment change. It should be low
> risk.
>
> Thanks!

Thanks José! Will get on that now.

Cheers,

Chris

On Thu, Aug 4, 2022 at 11:32 AM José Armando García Sancio
 wrote:

> On Mon, Aug 1, 2022 at 1:51 AM Matthew Benedict de Detrich
>  wrote:
> > Due to time pressure from the release schedule does it make sense to
> merge the PR as is since it already has the necessary approval from Luke or
> should we wait for final reviews from Mickael/Tom as well?
>
> Thanks for the update Matthew. You can try tagging them with their
> GitHub alias in the PR.
>
> --
> -José
>


Re: [DISCUSS] Apache Kafka 3.3.0 Release

2022-08-04 Thread José Armando García Sancio
On Thu, Jul 28, 2022 at 2:16 PM Chris Egerton  wrote:
> Would it be okay to backport https://github.com/apache/kafka/pull/12451 to 
> the current 3.3 branch? It's a strictly cosmetic change that updates a 
> misleading comment about exactly-once support for source connectors. I'm 
> hoping it'll make life easier for anyone who has to debug this feature by 
> saving some confusion.

Yes. Feel free to cherry pick it to the 3.3.0 branch. As you said, it
is mainly a formatting change and a comment change. It should be low
risk.

Thanks!
-- 
-José


Re: [DISCUSS] Apache Kafka 3.3.0 Release

2022-08-04 Thread Justine Olshan
Hey Jose.
I found a gap in handling ISR changes in ZK mode. We just need to prevent
brokers that are offline from being added to ISR. Since KIP-841 is part of
this release and the fix should be small (a few lines), I propose adding
https://issues.apache.org/jira/browse/KAFKA-14140 to the 3.3 release.
I'm hoping to have the PR reviewed and completed next week.

Let me know what you think.
Thanks,
Justine
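
A minimal sketch of the check being described; isBrokerAlive and the exception are
illustrative stand-ins for however the controller tracks liveness, not the actual patch:

import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class IsrExpansionCheckSketch {
    private final Set<Integer> liveBrokers;

    IsrExpansionCheckSketch(Set<Integer> liveBrokers) {
        this.liveBrokers = liveBrokers;
    }

    boolean isBrokerAlive(int brokerId) {
        return liveBrokers.contains(brokerId);
    }

    // Reject a proposed ISR that contains brokers the controller currently
    // considers offline, instead of blindly accepting the expansion.
    List<Integer> validateProposedIsr(List<Integer> proposedIsr) {
        List<Integer> offline = proposedIsr.stream()
                .filter(id -> !isBrokerAlive(id))
                .collect(Collectors.toList());
        if (!offline.isEmpty()) {
            throw new IllegalStateException(
                    "Brokers " + offline + " are offline and cannot be added to the ISR");
        }
        return proposedIsr;
    }
}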

On Thu, Aug 4, 2022 at 8:30 AM José Armando García Sancio
 wrote:

> On Thu, Jul 28, 2022 at 2:16 PM Chris Egerton 
> wrote:
> > Would it be okay to backport https://github.com/apache/kafka/pull/12451
> to the current 3.3 branch? It's a strictly cosmetic change that updates a
> misleading comment about exactly-once support for source connectors. I'm
> hoping it'll make life easier for anyone who has to debug this feature by
> saving some confusion.
>
> Yes. Feel free to cherry pick it to the 3.3.0 branch. As you said, it
> is mainly a formatting change and a comment change. It should be low
> risk.
>
> Thanks!
> --
> -José
>


Re: [DISCUSS] Apache Kafka 3.3.0 Release

2022-08-04 Thread José Armando García Sancio
On Mon, Aug 1, 2022 at 1:51 AM Matthew Benedict de Detrich
 wrote:
> Due to time pressure from the release schedule does it make sense to merge 
> the PR as is since it already has the necessary approval from Luke or should 
> we wait for final reviews from Mickael/Tom as well?

Thanks for the update Matthew. You can try tagging them with their
GitHub alias in the PR.

-- 
-José


Re: [DISCUSS] Apache Kafka 3.3.0 Release

2022-08-04 Thread José Armando García Sancio
On Thu, Jul 21, 2022 at 8:10 PM Luke Chen  wrote:
> I just found the KIP-831 is not listed in the v3.3 planned KIPs.
> It is completed and merged.
> Please help add it.

Thanks. It should now be in the 3.3.0 release page.

-- 
-José


Re: [DISCUSS] Apache Kafka 3.3.0 Release

2022-08-04 Thread José Armando García Sancio
On Thu, Jul 21, 2022 at 1:56 PM Randall Hauch  wrote:
> Will you approve me merging the fix to the `3.3` branch for inclusion in 
> 3.3.0?

Yes. I approved this fix for 3.3.0. Randall merged it to the 3.3.0 branch.

-- 
-José


[GitHub] [kafka-site] bbejeck commented on pull request #433: MINOR: Add placeholder images that will load iframe.

2022-08-04 Thread GitBox


bbejeck commented on PR #433:
URL: https://github.com/apache/kafka-site/pull/433#issuecomment-1205402084

   @mimaison, I've addressed your comments - thanks for the review. I think it's 
important to get back the original look and feel.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [DISCUSS] Apache Kafka 3.3.0 Release

2022-08-04 Thread José Armando García Sancio
On Thu, Jul 14, 2022 at 4:55 PM Jason Gustafson
 wrote:
>
> Hey Jose,
>
> Thanks for volunteering to manage the release! KIP-833 is currently slotted
> for 3.3. We've been getting some help from Jack Vanlighty to validate the
> raft implementation in TLA+ and with frameworks like Jepsen. The
> specification is written here if anyone is interested:
> https://github.com/Vanlightly/raft-tlaplus/blob/main/specifications/pull-raft/KRaft.tla.
> The main gap that this work uncovered in our implementation is documented
> here: https://issues.apache.org/jira/browse/KAFKA-14077. I do believe that
> KIP-833 depends on fixing this issue, so I wanted to see how you feel about
> giving us a little more time to address it?


Thanks Jason.

Ismael, Jason, Colin and I discussed this offline. We don't think this
should be a blocker for 3.3.0. "KIP-853: KRaft Voter Changes" is my
proposal to fix this. If the KIP is approved, we should be able to
include the fix in 3.4.0. I went ahead and replaced the fix version
with 3.4.0.

For 3.3.0 we planned to improve the kafka-metadata-quorum tool so that
it can tell the Apache Kafka administrator when it is safe to bring
back a controller with a failed disk.

-- 
-José


Re: [DISCUSS] Apache Kafka 3.3.0 Release

2022-08-04 Thread José Armando García Sancio
On Wed, Jul 13, 2022 at 10:01 AM Sagar  wrote:
> Well actually I have 2 approved PRs from Kafka Connect:
>
> https://github.com/apache/kafka/pull/12321
> https://github.com/apache/kafka/pull/12309
>
> Not sure how to get these merged though but I think these can go into 3.3
> release.

Thank you for the fixes. What do you think Chris Egerton since you
reviewed them and merged them into trunk?

-- 
-José


Re: [DISCUSS] Apache Kafka 3.3.0 Release

2022-08-04 Thread José Armando García Sancio
Excuse the delay in the response. I was busy dealing with some
potentially blocking issues for the 3.3.0 release.

On Wed, Jul 13, 2022 at 4:33 AM Divij Vaidya  wrote:
> A few of my PRs have been pending review for quite some time, which I was hoping to
> merge into 3.3. I have already marked them with "Fix version=3.3.0" so that
> you can track them using the JIRA filter you shared earlier
> 
> in this thread. Would you have some time to review them?
>

Thanks for your contribution. It looks like some of those issues have
Apache Kafka committers reviewing them. I can try helping if I have
time.
-- 
-José


[jira] [Created] (KAFKA-14141) Unable to abort a stale transaction

2022-08-04 Thread Yordan Pavlov (Jira)
Yordan Pavlov created KAFKA-14141:
-

 Summary: Unable to abort a stale transaction
 Key: KAFKA-14141
 URL: https://issues.apache.org/jira/browse/KAFKA-14141
 Project: Kafka
  Issue Type: Bug
Reporter: Yordan Pavlov


I am using the Kafka CLI tools to try to abort an old transaction. The 
transaction looks like so:
{code:java}
/opt/kafka_2.13-3.2.0/bin/kafka-transactions.sh --bootstrap-server 
kafka-hz.stage.san:30911 list | grep eth_network_growth | grep -v Empty

eth_network_growth-0-1    2              1212324       Ongoing
{code}
 

The corresponding producer looks like so:
{code:java}
/opt/kafka_2.13-3.2.0/bin/kafka-transactions.sh --bootstrap-server 
kafka-hz.stage.san:30913 describe-producer --partition 0 --topic 
eth_network_growth
ProducerId    ProducerEpoch    LatestCoordinatorEpoch    LastSequence    LastTimestamp    CurrentTransactionStartOffset
1212324       220              301                       93              1659622703834    11366
{code}
 What I am attempting is:
{code:java}
/opt/kafka_2.13-3.2.0/bin/kafka-transactions.sh --bootstrap-server 
kafka-hz.stage.san:30913 abort --topic eth_network_growth --partition 0 
--producer-id 1212324 --producer-epoch 220 --coordinator-epoch 301{code}
This command exits quietly but doesn't seem to have any effect, as both the 
transaction and the corresponding producer are still reported. Any hints on 
what I should check?

The Kafka brokers are running Confluent images based on Kafka 3.1.x (finding 
the exact version turned out to be tricky).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-844: Transactional State Stores

2022-08-04 Thread Alexander Sorokoumov
Hey Bruno,

Thank you for the suggestions and the clarifying questions. I believe that
they cover the core of this proposal, so it is crucial for us to be on the
same page.

1. Don't you want to deprecate StateStore#flush().


Good call! I updated both the proposal and the prototype.

 2. I would shorten Materialized#withTransactionalityEnabled() to
> Materialized#withTransactionsEnabled().


Turns out, these methods are no longer necessary. I removed them from the
proposal and the prototype.


> 3. Could you also describe a bit more in detail where the offsets passed
> into commit() and recover() come from?


The offset passed into StateStore#commit is the last offset committed to
the changelog topic. The offset passed into StateStore#recover is the last
checkpointed offset for the given StateStore. Let's look at steps 3 and 4
in the commit workflow. After the TaskExecutor/TaskManager commits, it calls
StreamTask#postCommit[1] that in turn:
a. updates the changelog offsets via
ProcessorStateManager#updateChangelogOffsets[2]. The offsets here come from
the RecordCollector[3], which tracks the latest offsets the producer sent
without exception[4, 5].
b. flushes/commits the state store in AbstractTask#maybeCheckpoint[6]. This
method essentially calls ProcessorStateManager methods - flush/commit[7]
and checkpoint[8]. ProcessorStateManager#commit goes over all state stores
that belong to that task and commits them with the offset obtained in step
`a`. ProcessorStateManager#checkpoint writes down those offsets for all
state stores, except for non-transactional ones in the case of EOS.

During initialization, StreamTask calls
StateManagerUtil#registerStateStores[8] that in turn calls
ProcessorStateManager#initializeStoreOffsetsFromCheckpoint[9]. At the
moment, this method assigns checkpointed offsets to the corresponding state
stores[10]. The prototype also calls StateStore#recover with the
checkpointed offset and assigns the offset returned by recover()[11].
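
For readers following along, here is a sketch of the two methods as they are used in
this description; the signatures are my reading of the discussion and the prototype,
not the final KIP interface:

// Hedged sketch of the proposal's new StateStore methods as described above.
public interface TransactionalStateStoreSketch {

    // Atomically makes all uncommitted writes durable and associates them with the
    // given changelog offset (the last offset committed to the changelog topic).
    void commit(Long changelogOffset);

    // Called on init with the last checkpointed offset for this store. Returns the
    // offset the store actually corresponds to after any local recovery (e.g.
    // discarding or rolling forward an interrupted commit), so that changelog
    // replay can resume from the right position.
    Long recover(Long checkpointedOffset);
}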

4. I do not quite understand how a state store can roll forward. You
> mention in the thread the following:


The 2-state-stores commit looks like this [12] (a sketch follows the list):

   1. Flush the temporary state store.
   2. Create a commit marker with a changelog offset corresponding to the
   state we are committing.
   3. Go over all keys in the temporary store and write them down to the
   main one.
   4. Wipe the temporary store.
   5. Delete the commit marker.
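
A compressed sketch of these five steps (store and marker handling are simplified; this
is not the prototype code):

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class TwoStoreCommitSketch {
    Map<String, byte[]> mainStore = new HashMap<>();
    Map<String, byte[]> tempStore = new HashMap<>();
    Optional<Long> commitMarker = Optional.empty();    // persisted marker holding the offset

    void commit(long changelogOffset) {
        flushTempStore();                              // step 1
        commitMarker = Optional.of(changelogOffset);   // step 2: create the commit marker
        copyTempIntoMain();                            // step 3
        tempStore.clear();                             // step 4: wipe the temporary store
        commitMarker = Optional.empty();               // step 5: delete the commit marker
    }

    void flushTempStore() { /* fsync the temporary store */ }

    void copyTempIntoMain() {
        // Idempotent: re-copying after a crash overwrites entries with the same values.
        mainStore.putAll(tempStore);
    }
}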


Let's consider the crash failure scenarios (a recovery sketch follows this list):

   - Crash failure happens between steps 1 and 2. The main state store is
   in a consistent state that corresponds to the previously checkpointed
   offset. StateStore#recover throws away the temporary store and proceeds
   from the last checkpointed offset.
   - Crash failure happens between steps 2 and 3. We do not know which keys
   from the temporary store were already written to the main store, so we
   can't roll back. There are two options - either wipe the main store or roll
   forward. Since the point of this proposal is to avoid situations where we
   throw away the state, and we do not care which consistent state the store
   ends up in, we roll forward by continuing from step 3.
   - Crash failure happens between steps 3 and 4. We can't distinguish
   between this and the previous scenario, so we write all the keys from the
   temporary store. This is okay because the operation is idempotent.
   - Crash failure happens between steps 4 and 5. Again, we can't
   distinguish between this and previous scenarios, but the temporary store is
   already empty. Even though we write all keys from the temporary store, this
   operation is, in fact, no-op.
   - Crash failure happens between step 5 and checkpoint. This is the case
   you referred to in question 5. The commit is finished, but it is not
   reflected at the checkpoint. recover() returns the offset of the previous
   commit here, which is incorrect, but it is okay because we will replay the
   changelog from the previously committed offset. As changelog replay is
   idempotent, the state store recovers into a consistent state.
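
Putting these scenarios together, a sketch of what recover() could do at startup,
extending the TwoStoreCommitSketch class from the commit sketch above (again,
illustrative only):

// Belongs to the TwoStoreCommitSketch class shown earlier.
Long recover(Long checkpointedOffset) {
    if (commitMarker.isPresent()) {
        // A commit was in flight (crash between steps 2 and 5): roll forward by
        // redoing the idempotent steps 3-5 rather than wiping the main store.
        long offset = commitMarker.get();
        copyTempIntoMain();
        tempStore.clear();
        commitMarker = Optional.empty();
        return offset;
    }
    // No marker: either no commit was in flight or the crash happened before the
    // marker was written. Discard the temporary store and report the checkpointed
    // offset (or the previous commit's offset); idempotent changelog replay
    // brings the store back to a consistent state.
    tempStore.clear();
    return checkpointedOffset;
}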

The last crash failure scenario is a natural transition to

how should Streams know what to write into the checkpoint file
> after the crash?
>

As mentioned above, the Streams app writes the checkpoint file after the
Kafka transaction and then the StateStore commit. Same as without the
proposal, it should write the committed offset, as it is the same for both
the Kafka changelog and the state store.


> This issue arises because we store the offset outside of the state
> store. Maybe we need an additional method on the state store interface
> that returns the offset at which the state store is.


In my opinion, we should include in the interface only the guarantees that
are necessary to preserve EOS without wiping the local state. This way, we
allow more room for possible implementations. Thanks to the idempotency of
the changelog replay, it is "good enough" if StateStore#recover 

Re: ARM/PowerPC builds

2022-08-04 Thread David Arthur
Divij, I believe the ARM node is not managed by Apache but rather access to
it is donated by some external entity. I opened an INFRA ticket the last
time we had issues and the Infra folks reached out to the owner of the node
to resolve the issue.



On Thu, Aug 4, 2022 at 7:13 AM Matthew Benedict de Detrich
 wrote:

> Quite happy to see this change go through since the ARM builds were
> constantly failing; however, I reiterate what Divij Vaidya is saying. I just
> recently got a new MacBook M1 laptop that has ARM architecture and even
> locally the tests fail (these are the same tests that also failed in
> Jenkins).
>
> We should get to the root of the issue, especially as more people will get
> newer Apple laptops over time.
>
> --
> Matthew de Detrich
> Aiven Deutschland GmbH
> Immanuelkirchstraße 26, 10405 Berlin
> Amtsgericht Charlottenburg, HRB 209739 B
>
> Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> m: +491603708037
> w: aiven.io e: matthew.dedetr...@aiven.io
> On 4. Aug 2022, 12:36 +0200, Divij Vaidya ,
> wrote:
> > Thank you. This would greatly improve the PR experience since now there is
> > a higher probability of it being green.
> >
> > Side question though, do we know why the ARM tests are timing out? Should we
> > start a JIRA with Apache Infra to find the root cause?
> >
> > —
> > Divij Vaidya
> >
> >
> >
> > On Thu, Aug 4, 2022 at 12:42 AM Colin McCabe  wrote:
> >
> > > Just a quick note. Today we committed
> > > https://github.com/apache/kafka/pull/12380 , "MINOR: Remove
> ARM/PowerPC
> > > builds from Jenkinsfile #12380". This PR removes the ARM and PowerPC
> builds
> > > from the Jenkinsfile.
> > >
> > > The rationale is that these builds seem to be failing all the time, and
> > > this is very disruptive. I personally didn't see any successes in the
> last
> > > week or two. So I think we need to rethink this integration a bit.
> > >
> > > I'd suggest that we run these builds as nightly builds rather than on
> each
> > > commit. It's going to be rare that we make a change that succeeds on
> x86
> > > but breaks on PowerPC or ARM. This would let us have very long
> timeouts on
> > > our ARM and PowerPC builds (they could take all night if necessary),
> hence
> > > avoiding this issue.
> > >
> > > best,
> > > Colin
> > >
> > --
> > Divij Vaidya
>


3.3 release date?

2022-08-04 Thread Gregory M . Foreman
Hello:

I have a client considering moving to KRaft.  The page here:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-833%3A+Mark+KRaft+as+Production+Ready

indicates that 3.3 would be the production-ready version of KRaft and available 
this month.  Is this month still a valid target?

Thanks,
Greg

TopologyTestDriver and IQv2

2022-08-04 Thread Jorge Delgado
Hello, what is the approach to unit test the new Queries introduced by
Interactive Queries v2? Is the TopologyTestDriver being updated to support
it?

Regards,
Jorge


Re: ARM/PowerPC builds

2022-08-04 Thread Matthew Benedict de Detrich
Quite happy to see this change go through since the ARM builds were 
constantly failing; however, I reiterate what Divij Vaidya is saying. I just 
recently got a new MacBook M1 laptop that has ARM architecture and even locally 
the tests fail (these are the same tests that also failed in Jenkins).

We should get to the root of the issue, especially as more people will get newer 
Apple laptops over time.

--
Matthew de Detrich
Aiven Deutschland GmbH
Immanuelkirchstraße 26, 10405 Berlin
Amtsgericht Charlottenburg, HRB 209739 B

Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
m: +491603708037
w: aiven.io e: matthew.dedetr...@aiven.io
On 4. Aug 2022, 12:36 +0200, Divij Vaidya , wrote:
> Thank you. This would greatly improve the PR experience since now there is
> a higher probability of it being green.
>
> Side question though, do we know why the ARM tests are timing out? Should we
> start a JIRA with Apache Infra to find the root cause?
>
> —
> Divij Vaidya
>
>
>
> On Thu, Aug 4, 2022 at 12:42 AM Colin McCabe  wrote:
>
> > Just a quick note. Today we committed
> > https://github.com/apache/kafka/pull/12380 , "MINOR: Remove ARM/PowerPC
> > builds from Jenkinsfile #12380". This PR removes the ARM and PowerPC builds
> > from the Jenkinsfile.
> >
> > The rationale is that these builds seem to be failing all the time, and
> > this is very disruptive. I personally didn't see any successes in the last
> > week or two. So I think we need to rethink this integration a bit.
> >
> > I'd suggest that we run these builds as nightly builds rather than on each
> > commit. It's going to be rare that we make a change that succeeds on x86
> > but breaks on PowerPC or ARM. This would let us have very long timeouts on
> > our ARM and PowerPC builds (they could take all night if necessary), hence
> > avoiding this issue.
> >
> > best,
> > Colin
> >
> --
> Divij Vaidya


Re: ARM/PowerPC builds

2022-08-04 Thread Divij Vaidya
Thank you. This would greatly improve the PR experience since now there is
a higher probability of it being green.

Side question though, do we know why the ARM tests are timing out? Should we
start a JIRA with Apache Infra to find the root cause?

—
Divij Vaidya



On Thu, Aug 4, 2022 at 12:42 AM Colin McCabe  wrote:

> Just a quick note. Today we committed
> https://github.com/apache/kafka/pull/12380 , "MINOR: Remove ARM/PowerPC
> builds from Jenkinsfile #12380". This PR removes the ARM and PowerPC builds
> from the Jenkinsfile.
>
> The rationale is that these builds seem to be failing all the time, and
> this is very disruptive. I personally didn't see any successes in the last
> week or two. So I think we need to rethink this integration a bit.
>
> I'd suggest that we run these builds as nightly builds rather than on each
> commit. It's going to be rare that we make a change that succeeds on x86
> but breaks on PowerPC or ARM. This would let us have very long timeouts on
> our ARM and PowerPC builds (they could take all night if necessary), hence
> avoiding this issue.
>
> best,
> Colin
>
-- 
Divij Vaidya


[GitHub] [kafka-site] mimaison commented on pull request #433: MINOR: Add placeholder images that will load iframe.

2022-08-04 Thread GitBox


mimaison commented on PR #433:
URL: https://github.com/apache/kafka-site/pull/433#issuecomment-1205049698

   It does not seem to render quite right for me.
   
   The first video is fine:
   https://user-images.githubusercontent.com/903615/182822374-326f590b-c5bb-4ec8-af0f-9bc2e9e473e3.png
   
   But then for the 3 other ones the text is on the side:
   https://user-images.githubusercontent.com/903615/182822477-900dbc54-dd52-4303-8c50-df0892a6383b.png
   
   Also, should we bring back the previous layout (for example at 
   a70c99b2a9b4a06473b60c38961042374b7a1c20), where the list of video names was 
   on the right?
   https://user-images.githubusercontent.com/903615/182822710-992bc746-5562-4201-8464-d715258b0fc1.png


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org