[jira] [Resolved] (KAFKA-13292) InvalidPidMappingException: The producer attempted to use a producer id which is not currently assigned to its transactional id

2021-09-16 Thread NEERAJ VAIDYA (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

NEERAJ VAIDYA resolved KAFKA-13292.
---
Resolution: Fixed

Upgraded the client libraries to 2.8.0 while still using a 2.7.0 broker.

Using the new StreamsUncaughtExceptionHandler, added code to return 
REPLACE_THREAD in case of InvalidPidMappingException.

This causes the application to create a new thread and continue processing 
events.
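
For reference, a minimal sketch of such a handler, assuming Kafka Streams 
2.8.0; the topology and props objects are placeholders, and the cause-chain 
walk is illustrative, since Streams may wrap the producer exception. (The 
7-day pattern in the report below matches the broker default 
transactional.id.expiration.ms of 604800000 ms, i.e. 7 days.)

{code:java}
import org.apache.kafka.common.errors.InvalidPidMappingException;
import org.apache.kafka.streams.KafkaStreams;
import static org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.REPLACE_THREAD;
import static org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.SHUTDOWN_CLIENT;

KafkaStreams streams = new KafkaStreams(topology, props); // topology/props assumed to exist
streams.setUncaughtExceptionHandler(exception -> {
    // Walk the cause chain: by the time it reaches the handler, the
    // InvalidPidMappingException may be wrapped in a StreamsException.
    for (Throwable t = exception; t != null; t = t.getCause()) {
        if (t instanceof InvalidPidMappingException) {
            return REPLACE_THREAD;  // start a fresh thread and keep processing
        }
    }
    return SHUTDOWN_CLIENT;         // anything else: fail as before
});
streams.start();
{code}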

> InvalidPidMappingException: The producer attempted to use a producer id which 
> is not currently assigned to its transactional id
> ---
>
> Key: KAFKA-13292
> URL: https://issues.apache.org/jira/browse/KAFKA-13292
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.7.0
>Reporter: NEERAJ VAIDYA
>Priority: Major
>
> I have a KafkaStreams application which consumes from a topic that has 12 
> partitions. The incoming message rate into this topic is very low, perhaps 
> 3-4 per minute. Also, some partitions will not receive messages for more than 
> 7 days.
>  
> Exactly after 7 days of starting this application, I seem to be getting the 
> following exception, and the application shuts down without processing any 
> more messages:
>  
> {code:java}
> 2021-09-10T12:21:59.636 [kafka-producer-network-thread | 
> mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1-0_2-producer] 
> INFO  o.a.k.c.p.i.TransactionManager - MSG=[Producer 
> clientId=mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1-0_2-producer,
>  transactionalId=mtx-caf-0_2] Transiting to abortable error state due to 
> org.apache.kafka.common.errors.InvalidPidMappingException: The producer 
> attempted to use a producer id which is not currently assigned to its 
> transactional id.
> 2021-09-10T12:21:59.642 [kafka-producer-network-thread | 
> mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1-0_2-producer] 
> ERROR o.a.k.s.p.i.RecordCollectorImpl - MSG=stream-thread 
> [mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1] task [0_2] 
> Error encountered sending record to topic 
> mtx-caf-DuplicateCheckStore-changelog for task 0_2 due to:
> org.apache.kafka.common.errors.InvalidPidMappingException: The producer 
> attempted to use a producer id which is not currently assigned to its 
> transactional id.
> Exception handler choose to FAIL the processing, no more records would be 
> sent.
> 2021-09-10T12:21:59.740 
> [mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1] ERROR 
> o.a.k.s.p.internals.StreamThread - MSG=stream-thread 
> [mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1] Encountered the 
> following exception during processing and the thread is going to shut down:
> org.apache.kafka.streams.errors.StreamsException: Error encountered sending 
> record to topic mtx-caf-DuplicateCheckStore-changelog for task 0_2 due to:
> org.apache.kafka.common.errors.InvalidPidMappingException: The producer 
> attempted to use a producer id which is not currently assigned to its 
> transactional id.
> Exception handler choose to FAIL the processing, no more records would be 
> sent.
>         at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:214)
>         at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.lambda$send$0(RecordCollectorImpl.java:186)
>         at 
> org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1363)
>         at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:231)
>         at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.abort(ProducerBatch.java:159)
>         at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.abortUndrainedBatches(RecordAccumulator.java:781)
>         at 
> org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:425)
>         at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:313)
>         at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: org.apache.kafka.common.errors.InvalidPidMappingException: The 
> producer attempted to use a producer id which is not currently assigned to 
> its transactional id.
> 2021-09-10T12:21:59.740 
> [mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1] INFO  
> o.a.k.s.p.internals.StreamThread - MSG=stream-thread 
> [mtx-caf-53dc7e96-90f1-4ae9-8af6-236d22c88e08-StreamThread-1] State 
> transition from RUNNING to PENDING_SHUTDOWN
> {code}
>  
> After this, I can see that all 12 tasks (because there are 12 

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #473

2021-09-16 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 492200 lines...]
[2021-09-17T01:45:26.520Z] LogCleanerParameterizedIntegrationTest > 
cleanerTest(CompressionType) > 
kafka.log.LogCleanerParameterizedIntegrationTest.cleanerTest(CompressionType)[4]
 STARTED
[2021-09-17T01:45:28.514Z] 
[2021-09-17T01:45:28.514Z] PlaintextConsumerTest > testAutoCommitOnRebalance() 
PASSED
[2021-09-17T01:45:28.514Z] 
[2021-09-17T01:45:28.514Z] PlaintextConsumerTest > 
testInterceptorsWithWrongKeyValue() STARTED
[2021-09-17T01:45:33.295Z] 
[2021-09-17T01:45:33.295Z] PlaintextConsumerTest > 
testInterceptorsWithWrongKeyValue() PASSED
[2021-09-17T01:45:33.295Z] 
[2021-09-17T01:45:33.295Z] PlaintextConsumerTest > 
testPerPartitionLeadWithMaxPollRecords() STARTED
[2021-09-17T01:45:37.896Z] 
[2021-09-17T01:45:37.896Z] PlaintextConsumerTest > 
testPerPartitionLeadWithMaxPollRecords() PASSED
[2021-09-17T01:45:37.896Z] 
[2021-09-17T01:45:37.896Z] PlaintextConsumerTest > testHeaders() STARTED
[2021-09-17T01:45:41.460Z] 
[2021-09-17T01:45:41.460Z] PlaintextConsumerTest > testHeaders() PASSED
[2021-09-17T01:45:41.460Z] 
[2021-09-17T01:45:41.460Z] PlaintextConsumerTest > 
testMaxPollIntervalMsDelayInAssignment() STARTED
[2021-09-17T01:45:42.982Z] 
[2021-09-17T01:45:42.982Z] LogCleanerParameterizedIntegrationTest > 
cleanerTest(CompressionType) > 
kafka.log.LogCleanerParameterizedIntegrationTest.cleanerTest(CompressionType)[4]
 PASSED
[2021-09-17T01:45:42.982Z] 
[2021-09-17T01:45:42.982Z] LogCleanerParameterizedIntegrationTest > 
cleanerTest(CompressionType) > 
kafka.log.LogCleanerParameterizedIntegrationTest.cleanerTest(CompressionType)[5]
 STARTED
[2021-09-17T01:45:47.189Z] 
[2021-09-17T01:45:47.189Z] PlaintextConsumerTest > 
testMaxPollIntervalMsDelayInAssignment() PASSED
[2021-09-17T01:45:47.189Z] 
[2021-09-17T01:45:47.189Z] PlaintextConsumerTest > 
testHeadersSerializerDeserializer() STARTED
[2021-09-17T01:45:51.784Z] 
[2021-09-17T01:45:51.785Z] PlaintextConsumerTest > 
testHeadersSerializerDeserializer() PASSED
[2021-09-17T01:45:51.785Z] 
[2021-09-17T01:45:51.785Z] PlaintextConsumerTest > 
testDeprecatedPollBlocksForAssignment() STARTED
[2021-09-17T01:45:56.380Z] 
[2021-09-17T01:45:56.380Z] PlaintextConsumerTest > 
testDeprecatedPollBlocksForAssignment() PASSED
[2021-09-17T01:45:56.380Z] 
[2021-09-17T01:45:56.380Z] PlaintextConsumerTest > 
testPartitionPauseAndResume() STARTED
[2021-09-17T01:45:56.380Z] 
[2021-09-17T01:45:56.380Z] UserQuotaTest > testThrottledProducerConsumer() 
PASSED
[2021-09-17T01:45:56.380Z] 
[2021-09-17T01:45:56.380Z] UserQuotaTest > testQuotaOverrideDelete() STARTED
[2021-09-17T01:46:02.019Z] 
[2021-09-17T01:46:02.019Z] LogCleanerParameterizedIntegrationTest > 
cleanerTest(CompressionType) > 
kafka.log.LogCleanerParameterizedIntegrationTest.cleanerTest(CompressionType)[5]
 PASSED
[2021-09-17T01:46:02.019Z] 
[2021-09-17T01:46:02.019Z] LogCleanerParameterizedIntegrationTest > 
testCleanerWithMessageFormatV0(CompressionType) > 
kafka.log.LogCleanerParameterizedIntegrationTest.testCleanerWithMessageFormatV0(CompressionType)[1]
 STARTED
[2021-09-17T01:46:02.108Z] 
[2021-09-17T01:46:02.108Z] PlaintextConsumerTest > 
testPartitionPauseAndResume() PASSED
[2021-09-17T01:46:02.108Z] 
[2021-09-17T01:46:02.108Z] PlaintextConsumerTest > 
testQuotaMetricsNotCreatedIfNoQuotasConfigured() STARTED
[2021-09-17T01:46:06.705Z] 
[2021-09-17T01:46:06.705Z] PlaintextConsumerTest > 
testQuotaMetricsNotCreatedIfNoQuotasConfigured() PASSED
[2021-09-17T01:46:06.705Z] 
[2021-09-17T01:46:06.705Z] PlaintextConsumerTest > 
testPerPartitionLagMetricsCleanUpWithSubscribe() STARTED
[2021-09-17T01:46:12.435Z] 
[2021-09-17T01:46:12.435Z] PlaintextConsumerTest > 
testPerPartitionLagMetricsCleanUpWithSubscribe() PASSED
[2021-09-17T01:46:12.435Z] 
[2021-09-17T01:46:12.435Z] PlaintextConsumerTest > 
testConsumeMessagesWithLogAppendTime() STARTED
[2021-09-17T01:46:15.938Z] 
[2021-09-17T01:46:15.938Z] LogCleanerParameterizedIntegrationTest > 
testCleanerWithMessageFormatV0(CompressionType) > 
kafka.log.LogCleanerParameterizedIntegrationTest.testCleanerWithMessageFormatV0(CompressionType)[1]
 PASSED
[2021-09-17T01:46:15.938Z] 
[2021-09-17T01:46:15.938Z] LogCleanerParameterizedIntegrationTest > 
testCleanerWithMessageFormatV0(CompressionType) > 
kafka.log.LogCleanerParameterizedIntegrationTest.testCleanerWithMessageFormatV0(CompressionType)[2]
 STARTED
[2021-09-17T01:46:17.027Z] 
[2021-09-17T01:46:17.027Z] PlaintextConsumerTest > 
testConsumeMessagesWithLogAppendTime() PASSED
[2021-09-17T01:46:17.027Z] 
[2021-09-17T01:46:17.027Z] PlaintextConsumerTest > 
testPerPartitionLagMetricsWhenReadCommitted() STARTED
[2021-09-17T01:46:21.620Z] 
[2021-09-17T01:46:21.620Z] PlaintextConsumerTest > 
testPerPartitionLagMetricsWhenReadCommitted() PASSED
[2021-09-17T01:46:21.620Z] 
[2021-09-17T01:46:21.620Z] PlaintextConsumerTest > 

[jira] [Resolved] (KAFKA-13216) Streams left/outer joins cause new internal changelog topic to grow unbounded

2021-09-16 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-13216.
---
Resolution: Fixed

> Streams left/outer joins cause new internal changelog topic to grow unbounded
> -
>
> Key: KAFKA-13216
> URL: https://issues.apache.org/jira/browse/KAFKA-13216
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Sergio Peña
>Assignee: Guozhang Wang
>Priority: Critical
> Fix For: 3.1.0
>
>
> This bug is caused by the improvements made in 
> https://issues.apache.org/jira/browse/KAFKA-10847, which fixes an issue with 
> stream-stream left/outer joins. The issue only occurs when a stream-stream 
> left/outer join is used with the new `JoinWindows.ofTimeDifferenceAndGrace()` 
> API that specifies the window time + grace period. This new API was added in 
> AK 3.0. No previous users are affected.
> The issue causes the internal changelog topic used by the new OUTERSHARED 
> window store to keep growing unbounded as new records arrive. The topic is 
> never cleaned up or compacted, even when tombstones are written to delete the 
> joined and/or expired records from the window store. The problem is caused by 
> a parameter the window store requires in order to retain duplicates. This 
> config gives tombstone records a new sequence ID as part of the key in the 
> changelog, making those keys unique and thus preventing the cleanup policy 
> from working.
> In 3.0, we deprecated {{JoinWindows.of(size)}} in favor of 
> {{JoinWindows.ofTimeDifferenceAndGrace()}} -- the old API uses the old 
> semantics and is thus not affected, while the new API enables the new 
> semantics; the problem is that by deprecating the old API we tell users that 
> they should switch to the new, broken API (both are shown in the sketch 
> after the list below).
> We have two ways forward:
>  * Fix the bug (non-trivial)
>  * Un-deprecate the old {{JoinWindows.of(size)}} API (and tell users not to 
> use the new but broken API)
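>
> For reference, a minimal sketch contrasting the two APIs (Kafka Streams 3.0); 
> stream, otherStream, joiner and the durations are illustrative:
> {code:java}
> import java.time.Duration;
> import org.apache.kafka.streams.kstream.JoinWindows;
>
> JoinWindows unaffected = JoinWindows.of(Duration.ofMinutes(5));  // deprecated in 3.0; old semantics
> JoinWindows affected = JoinWindows.ofTimeDifferenceAndGrace(
>         Duration.ofMinutes(5), Duration.ofMinutes(1));           // added in 3.0; hits this bug
>
> stream.leftJoin(otherStream, joiner, affected);                  // changelog grows unbounded
> {code}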





Become a Contributor

2021-09-16 Thread Luiz Fernando Fonseca
Hello. I would like to become a contributor in JIRA.

Username: luizfrf

Best Regards,


[jira] [Created] (KAFKA-13307) NPE in SyncGroup after an “empty” JoinGroup

2021-09-16 Thread Jean-Baptiste Mazon (Jira)
Jean-Baptiste Mazon created KAFKA-13307:
---

 Summary: NPE in SyncGroup after an “empty” JoinGroup
 Key: KAFKA-13307
 URL: https://issues.apache.org/jira/browse/KAFKA-13307
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.8.0
 Environment: This is kafka_2.13-2.8.0, tarball from the website, on a 
single-host 3-node Linux 5.13.13 setup.
Reporter: Jean-Baptiste Mazon
 Attachments: join_then_sync.raw

Got this while socket-hacking on my raw consumer (binary messages attached, but 
they may not be directly usable, as there's a generated UUID involved).

 

Summary:
 # → JoinGroup Request v0 (groupId = "test-consumer", sessionTimeoutMs = 1, 
memberId = empty string, protocolType = "consumer", protocols = empty array)
 # ← JoinGroup Response v0 (errorCode = 0 (None), generationId = 7, 
protocolName = empty string, leader = 
"null-cd85a8fc-b376-44aa-a4b0-641d01466d10", memberId =  
"null-cd85a8fc-b376-44aa-a4b0-641d01466d10", members = [\{memberId = 
"null-cd85a8fc-b376-44aa-a4b0-641d01466d10", metadata = empty bytes}])
 # → SyncGroup Request v0  (groupId = "test-consumer", generationId = echo, 
memberId = echo, assignments = empty bytes)
 # ← SyncGroup Response v0 (errorCode = -1 (*UnknownServerError, with a NPE*), 
memberAssignment = empty bytes)

 

Server-side log:
{noformat}
[2021-09-16 21:46:15,979] ERROR [KafkaApi-0] Error when handling request: 
clientId=null, correlationId=67305986, api=SYNC_GROUP, version=0, 
body=SyncGroupRequestData(groupId='test-consumer', generationId=6, 
memberId='null-0e29c037-55b7-413a-9822-8b9d87b49a89', groupInstanceId=null, 
protocolType=null, protocolName=null, assignments=[]) 
(kafka.server.RequestHandlerHelper)
java.lang.NullPointerException: Cannot invoke 
"String.getBytes(java.nio.charset.Charset)" because "this.clientId" is null
 at 
kafka.internals.generated.GroupMetadataValue$MemberMetadata.addSize(GroupMetadataValue.java:643)
 at 
kafka.internals.generated.GroupMetadataValue.addSize(GroupMetadataValue.java:261)
 at org.apache.kafka.common.protocol.Message.size(Message.java:51)
 at 
org.apache.kafka.common.protocol.MessageUtil.toVersionPrefixedByteBuffer(MessageUtil.java:201)
 at 
org.apache.kafka.common.protocol.MessageUtil.toVersionPrefixedBytes(MessageUtil.java:210)
 at 
kafka.coordinator.group.GroupMetadataManager$.groupMetadataValue(GroupMetadataManager.scala:1065)
 at 
kafka.coordinator.group.GroupMetadataManager.storeGroup(GroupMetadataManager.scala:250)
 at 
kafka.coordinator.group.GroupCoordinator.$anonfun$doSyncGroup$1(GroupCoordinator.scala:421)
 at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
 at kafka.coordinator.group.GroupMetadata.inLock(GroupMetadata.scala:227)
 at 
kafka.coordinator.group.GroupCoordinator.handleSyncGroup(GroupCoordinator.scala:381)
 at kafka.server.KafkaApis.handleSyncGroupRequest(KafkaApis.scala:1555)
 at kafka.server.KafkaApis.handle(KafkaApis.scala:181)
 at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:74)
 at java.base/java.lang.Thread.run(Thread.java:831){noformat}
 

A similar one can be triggered with DescribeGroups by launching 
{{kafka-consumer-groups.sh --describe --group test-consumer}} shortly after:
{noformat}
[2021-09-16 21:47:49,911] ERROR [KafkaApi-0] Error when handling request: 
clientId=adminclient-1, correlationId=4, api=DESCRIBE_GROUPS, version=5, 
body=DescribeGroupsRequestData(groups=[test-consumer], 
includeAuthorizedOperations=false) (kafka.server.RequestHandlerHelper)
java.lang.NullPointerException: Cannot invoke 
"String.getBytes(java.nio.charset.Charset)" because "this.clientId" is null
 at 
org.apache.kafka.common.message.DescribeGroupsResponseData$DescribedGroupMember.addSize(DescribeGroupsResponseData.java:1116)
 at 
org.apache.kafka.common.message.DescribeGroupsResponseData$DescribedGroup.addSize(DescribeGroupsResponseData.java:640)
 at 
org.apache.kafka.common.message.DescribeGroupsResponseData.addSize(DescribeGroupsResponseData.java:212)
 at org.apache.kafka.common.protocol.SendBuilder.buildSend(SendBuilder.java:218)
 at 
org.apache.kafka.common.protocol.SendBuilder.buildResponseSend(SendBuilder.java:200)
 at 
org.apache.kafka.common.requests.AbstractResponse.toSend(AbstractResponse.java:43)
 at 
org.apache.kafka.common.requests.RequestContext.buildResponseSend(RequestContext.java:111)
 at 
kafka.network.RequestChannel$Request.buildResponseSend(RequestChannel.scala:132)
 at 
kafka.server.RequestHandlerHelper.sendResponse(RequestHandlerHelper.scala:185)
 at 
kafka.server.RequestHandlerHelper.sendResponseMaybeThrottle(RequestHandlerHelper.scala:101)
 at kafka.server.KafkaApis.sendResponseCallback$3(KafkaApis.scala:1378)
 at kafka.server.KafkaApis.handleDescribeGroupRequest(KafkaApis.scala:1417)
 at kafka.server.KafkaApis.handle(KafkaApis.scala:182)
 at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:74){noformat}

Re: [DISCUSS] Apache Kafka 3.1.0 release

2021-09-16 Thread Bill Bejeck
Thanks for volunteering for the 3.1.0 release, David!

It's a +1 from me.

-Bill

On Thu, Sep 16, 2021 at 3:08 PM Konstantine Karantasis <
kkaranta...@apache.org> wrote:

> Thanks for volunteering to run 3.1.0 David!
>
> +1
>
> Konstantine
>
>
> On Thu, Sep 16, 2021 at 6:42 PM Ismael Juma  wrote:
>
> > +1, thanks for volunteering David!
> >
> > Ismael
> >
> > On Thu, Sep 16, 2021, 6:47 AM David Jacot 
> > wrote:
> >
> > > Hello All,
> > >
> > > I'd like to volunteer to be the release manager for our next
> > > feature release, 3.1.0. If there are no objections, I'll send
> > > out the release plan soon.
> > >
> > > Regards,
> > > David
> > >
> >
>


Re: [DISCUSS] Apache Kafka 3.1.0 release

2021-09-16 Thread Konstantine Karantasis
Thanks for volunteering to run 3.1.0, David!

+1

Konstantine


On Thu, Sep 16, 2021 at 6:42 PM Ismael Juma  wrote:

> +1, thanks for volunteering David!
>
> Ismael
>
> On Thu, Sep 16, 2021, 6:47 AM David Jacot 
> wrote:
>
> > Hello All,
> >
> > I'd like to volunteer to be the release manager for our next
> > feature release, 3.1.0. If there are no objections, I'll send
> > out the release plan soon.
> >
> > Regards,
> > David
> >
>


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #472

2021-09-16 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 422715 lines...]
[2021-09-16T18:30:05.890Z] > Task :raft:testClasses UP-TO-DATE
[2021-09-16T18:30:05.890Z] > Task :connect:json:testJar
[2021-09-16T18:30:05.890Z] > Task :connect:json:testSrcJar
[2021-09-16T18:30:05.890Z] > Task :metadata:compileTestJava UP-TO-DATE
[2021-09-16T18:30:05.890Z] > Task :metadata:testClasses UP-TO-DATE
[2021-09-16T18:30:05.890Z] > Task :core:compileScala UP-TO-DATE
[2021-09-16T18:30:05.890Z] > Task :core:classes UP-TO-DATE
[2021-09-16T18:30:05.890Z] > Task :core:compileTestJava NO-SOURCE
[2021-09-16T18:30:05.890Z] > Task 
:clients:generateMetadataFileForMavenJavaPublication
[2021-09-16T18:30:05.890Z] > Task 
:clients:generatePomFileForMavenJavaPublication
[2021-09-16T18:30:05.890Z] 
[2021-09-16T18:30:05.890Z] > Task :streams:processMessages
[2021-09-16T18:30:05.890Z] Execution optimizations have been disabled for task 
':streams:processMessages' to ensure correctness due to the following reasons:
[2021-09-16T18:30:05.890Z]   - Gradle detected a problem with the following 
location: 
'/home/jenkins/workspace/Kafka_kafka_trunk/streams/src/generated/java/org/apache/kafka/streams/internals/generated'.
 Reason: Task ':streams:srcJar' uses this output of task 
':streams:processMessages' without declaring an explicit or implicit 
dependency. This can lead to incorrect results being produced, depending on 
what order the tasks are executed. Please refer to 
https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency
 for more details about this problem.
[2021-09-16T18:30:05.890Z] MessageGenerator: processed 1 Kafka message JSON 
files(s).
[2021-09-16T18:30:05.890Z] 
[2021-09-16T18:30:05.890Z] > Task :streams:compileJava UP-TO-DATE
[2021-09-16T18:30:05.890Z] > Task :streams:classes UP-TO-DATE
[2021-09-16T18:30:05.890Z] > Task :streams:test-utils:compileJava UP-TO-DATE
[2021-09-16T18:30:05.890Z] > Task :streams:copyDependantLibs UP-TO-DATE
[2021-09-16T18:30:05.890Z] > Task :streams:jar UP-TO-DATE
[2021-09-16T18:30:06.848Z] > Task :core:compileTestScala UP-TO-DATE
[2021-09-16T18:30:06.848Z] > Task :core:testClasses UP-TO-DATE
[2021-09-16T18:30:06.848Z] > Task 
:streams:generateMetadataFileForMavenJavaPublication
[2021-09-16T18:30:10.079Z] > Task :connect:api:javadoc
[2021-09-16T18:30:10.079Z] > Task :connect:api:copyDependantLibs UP-TO-DATE
[2021-09-16T18:30:10.079Z] > Task :connect:api:jar UP-TO-DATE
[2021-09-16T18:30:10.079Z] > Task 
:connect:api:generateMetadataFileForMavenJavaPublication
[2021-09-16T18:30:10.079Z] > Task :connect:json:copyDependantLibs UP-TO-DATE
[2021-09-16T18:30:10.079Z] > Task :connect:json:jar UP-TO-DATE
[2021-09-16T18:30:10.079Z] > Task 
:connect:json:generateMetadataFileForMavenJavaPublication
[2021-09-16T18:30:10.079Z] > Task 
:connect:json:publishMavenJavaPublicationToMavenLocal
[2021-09-16T18:30:10.079Z] > Task :connect:json:publishToMavenLocal
[2021-09-16T18:30:10.079Z] > Task :connect:api:javadocJar
[2021-09-16T18:30:10.079Z] > Task :connect:api:compileTestJava UP-TO-DATE
[2021-09-16T18:30:10.079Z] > Task :connect:api:testClasses UP-TO-DATE
[2021-09-16T18:30:10.079Z] > Task :connect:api:testJar
[2021-09-16T18:30:10.079Z] > Task :connect:api:testSrcJar
[2021-09-16T18:30:10.079Z] > Task 
:connect:api:publishMavenJavaPublicationToMavenLocal
[2021-09-16T18:30:10.079Z] > Task :connect:api:publishToMavenLocal
[2021-09-16T18:30:13.233Z] > Task :streams:javadoc
[2021-09-16T18:30:13.233Z] > Task :streams:javadocJar
[2021-09-16T18:30:13.233Z] > Task :streams:compileTestJava UP-TO-DATE
[2021-09-16T18:30:13.233Z] > Task :streams:testClasses UP-TO-DATE
[2021-09-16T18:30:13.233Z] > Task :streams:testJar
[2021-09-16T18:30:13.233Z] > Task :streams:testSrcJar
[2021-09-16T18:30:14.422Z] > Task 
:streams:publishMavenJavaPublicationToMavenLocal
[2021-09-16T18:30:14.422Z] > Task :streams:publishToMavenLocal
[2021-09-16T18:30:14.422Z] > Task :clients:javadoc
[2021-09-16T18:30:14.422Z] > Task :clients:javadocJar
[2021-09-16T18:30:15.553Z] 
[2021-09-16T18:30:15.553Z] > Task :clients:srcJar
[2021-09-16T18:30:15.553Z] Execution optimizations have been disabled for task 
':clients:srcJar' to ensure correctness due to the following reasons:
[2021-09-16T18:30:15.553Z]   - Gradle detected a problem with the following 
location: 
'/home/jenkins/workspace/Kafka_kafka_trunk/clients/src/generated/java'. Reason: 
Task ':clients:srcJar' uses this output of task ':clients:processMessages' 
without declaring an explicit or implicit dependency. This can lead to 
incorrect results being produced, depending on what order the tasks are 
executed. Please refer to 
https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency
 for more details about this problem.
[2021-09-16T18:30:15.553Z] 
[2021-09-16T18:30:15.553Z] > Task :clients:testJar
[2021-09-16T18:30:16.484Z] > Task :clients:testSrcJar

Preventing Kafka crashes on disk fill-up

2021-09-16 Thread Tirtha Chatterjee
Hello everyone.

As of today, when a data disk fills up and Kafka is unable to write to it
anymore, the Kafka process crashes. In this situation, users are unable to
use Kafka APIs to recover their cluster, and have to rely on manually
cleaning up disk space.

There are multiple approaches to addressing this issue. One of the
behaviors that users commonly want is to degrade gracefully: stop accepting
new produce traffic to the full disk while still retaining the ability to
call admin APIs and delete the data on disk. I am wondering what the views
of the community are on this as a default behavior. Please let me know.

I am sure this discussion has happened multiple times in the past in the
community. I would love not to re-invent the wheel or jump to a solution
without context, and want to build on the learnings from those earlier
discussions. What are the reasons why this might be challenging
to implement? What do you think are the primary obstacles and unknowns?

-- 
Regards
Tirtha Chatterjee


[jira] [Resolved] (KAFKA-8522) Tombstones can survive forever

2021-09-16 Thread Jun Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-8522.

Fix Version/s: 3.1.0
 Assignee: Richard Yu
   Resolution: Fixed

Finally, merged the PR to trunk. Thanks for the work, [~Yohan123].

> Tombstones can survive forever
> --
>
> Key: KAFKA-8522
> URL: https://issues.apache.org/jira/browse/KAFKA-8522
> Project: Kafka
>  Issue Type: Improvement
>  Components: log cleaner
>Reporter: Evelyn Bayes
>Assignee: Richard Yu
>Priority: Minor
> Fix For: 3.1.0
>
>
> This is a bit of a grey zone as to whether it's a "bug", but it is certainly 
> unintended behaviour.
>  
> Under specific conditions, tombstones effectively survive forever:
>  * Small amount of throughput;
>  * min.cleanable.dirty.ratio near or at 0; and
>  * Other parameters at default.
> What happens is that all the data continuously gets cycled into the oldest 
> segment. Old records get compacted away, but the new records continuously 
> update the timestamp of the oldest segment, resetting the countdown for 
> deleting tombstones. So tombstones build up in the oldest segment forever.
>  
> While you could "fix" this by reducing the segment size, this can be 
> undesirable, as a sudden change in throughput could cause a dangerous number 
> of segments to be created.
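>  
> For illustration, a minimal sketch (not from this issue) of creating a topic 
> under the conditions above via the Admin API; the topic name and the exact 
> values are placeholders, the keys are standard topic configs:
> {code:java}
> import java.util.Collections;
> import java.util.Map;
> import java.util.Properties;
> import org.apache.kafka.clients.admin.Admin;
> import org.apache.kafka.clients.admin.NewTopic;
>
> Properties props = new Properties();
> props.put("bootstrap.servers", "localhost:9092");
> try (Admin admin = Admin.create(props)) {
>     NewTopic topic = new NewTopic("low-throughput-compacted", 1, (short) 1)
>             .configs(Map.of(
>                     "cleanup.policy", "compact",
>                     "min.cleanable.dirty.ratio", "0.01",  // near 0, as described
>                     "delete.retention.ms", "86400000"));  // tombstone retention, default 24h
>     admin.createTopics(Collections.singletonList(topic)).all().get();
> }
> {code}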





Re: [DISCUSS] Apache Kafka 3.1.0 release

2021-09-16 Thread Ismael Juma
+1, thanks for volunteering, David!

Ismael

On Thu, Sep 16, 2021, 6:47 AM David Jacot 
wrote:

> Hello All,
>
> I'd like to volunteer to be the release manager for our next
> feature release, 3.1.0. If there are no objections, I'll send
> out the release plan soon.
>
> Regards,
> David
>


[DISCUSS] Apache Kafka 3.1.0 release

2021-09-16 Thread David Jacot
Hello All,

I'd like to volunteer to be the release manager for our next
feature release, 3.1.0. If there are no objections, I'll send
out the release plan soon.

Regards,
David


[jira] [Created] (KAFKA-13306) Null config value passes validation, but fails creation

2021-09-16 Thread Laszlo Istvan Hunyady (Jira)
Laszlo Istvan Hunyady created KAFKA-13306:
-

 Summary: Null config value passes validation, but fails creation
 Key: KAFKA-13306
 URL: https://issues.apache.org/jira/browse/KAFKA-13306
 Project: Kafka
  Issue Type: Bug
Reporter: Laszlo Istvan Hunyady


When validating a connector config containing a property with a null value, the 
validation passes; but when creating a connector with the same config, the 
worker fails to start the connector because of an invalid config.

Steps to reproduce:

1. Send PUT request to
{{connectRest}}/connector-plugins/FileStreamSource/config/validate

Request body:
{
 "connector.class": "FileStreamSource",
 "name": "file-source",
 "topic": "target-topic",
 "file":"/source.txt",
 "tasks.max": "1",
 "foo": null
}

Response:
{
 "name": "FileStreamSource",
 "error_count": 0,
 ...
}

2.a. Send PUT request to
{{connectRest}}/connectors/file-source/config

Request body:
{
 "connector.class": "FileStreamSource",
 "name": "file-source",
 "topic": "target-topic",
 "file":"/source.txt",
 "tasks.max": "1",
 "foo": null
}

2.b. Send POST request to
{{connectRest}}/connectors/

Request Body:
{
 "name": "file-source",
 "config": {
 "connector.class": "FileStreamSource",
 "name": "file-source",
 "topic": "target-topic",
 "file": "/source.txt",
 "tasks.max": "1",
 "foo": null
 }
}

Result:
The connector is created, but it fails to start with the below exception, which 
indicates an invalid config:

ERROR org.apache.kafka.connect.runtime.WorkerConnector: [file-source|worker] 
WorkerConnector{id=file-source} Error initializing connector
java.lang.ClassCastException: Non-string value found in original settings for 
key foo: null
 at 
org.apache.kafka.common.config.AbstractConfig.originalsStrings(AbstractConfig.java:234)
 at 
org.apache.kafka.connect.runtime.WorkerConnector.initialize(WorkerConnector.java:77)
 at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:256)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.processConnectorConfigUpdates(DistributedHerder.java:548)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:395)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:289)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)

 


