[jira] [Resolved] (KAFKA-13002) listOffsets must downgrade immediately for non MAX_TIMESTAMP specs

2021-06-30 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-13002.
-
  Reviewer: David Jacot
Resolution: Fixed

> listOffsets must downgrade immediately for non MAX_TIMESTAMP specs
> --
>
> Key: KAFKA-13002
> URL: https://issues.apache.org/jira/browse/KAFKA-13002
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: John Roesler
>Assignee: Tom Scott
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: soaks.png
>
>
> Note: this is not a report against a released version of AK. It seems to be a 
> problem on the trunk development branch only.
> After deploying our soak test against `trunk/HEAD` on Friday, I noticed that 
> Streams is no longer processing:
> !soaks.png!
> I found this stacktrace in the logs during startup:
> {code:java}
> 5075 [2021-06-25T16:50:44-05:00] 
> (streams-soak-trunk-ccloud-alos_soak_i-0691913411e8c77c3_streamslog) 
> [2021-06-25 21:50:44,499] WARN [i-0691913411e8c77c3-StreamThread-1] The 
> listOffsets request failed. 
> (org.apache.kafka.streams.processor.internals.ClientUtils)
>  5076 [2021-06-25T16:50:44-05:00] 
> (streams-soak-trunk-ccloud-alos_soak_i-0691913411e8c77c3_streamslog) 
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnsupportedVersionException: The broker does 
> not support LIST_OFFSETS with version in range [7,7].   The supported 
> range is [0,6].
>  5077 at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  5078 at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  5079 at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  5080 at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  5081 at 
> org.apache.kafka.streams.processor.internals.ClientUtils.getEndOffsets(ClientUtils.java:147)
>  5082 at 
> org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.populateClientStatesMap(StreamsPartitionAssignor.java:643)
>  5083 at 
> org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.assignTasksToClients(StreamsPartitionAssignor.java:579)
>  5084 at 
> org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.assign(StreamsPartitionAssignor.java:387)
>  5085 at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:589)
>  5086 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:689)
>  5087 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1000(AbstractCoordinator.java:111)
>  5088 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:593)
>  5089 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:556)
>  5090 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:1178)
>  5091 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:1153)
>  5092 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:206)
>  5093 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:169)
>  5094 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:129)
>  5095 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:602)
>  5096 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:412)
>  5097 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:297)
>  5098 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
>  5099 at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1297)
>  5100 at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1238)
>  5101 at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211)
>  5102 at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:932)
>  5
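The stack trace above shows the admin client pinning ListOffsets to version 7 (the version added for MAX_TIMESTAMP by KIP-734) even though no MAX_TIMESTAMP spec was requested, so pre-3.0 brokers reject the request. A minimal sketch of the intended behavior, picking the minimum required API version from the requested specs (the enum here is a simplified stand-in, not Kafka's real OffsetSpec class):

```java
import java.util.Collection;
import java.util.List;

public class ListOffsetsVersions {
    // Simplified stand-in for org.apache.kafka.clients.admin.OffsetSpec.
    enum Spec { EARLIEST, LATEST, FOR_TIMESTAMP, MAX_TIMESTAMP }

    // ListOffsets v7 is only needed for MAX_TIMESTAMP (KIP-734); every other
    // spec can be served by v0-v6 brokers, so only pin v7 when it is needed.
    static short minRequiredVersion(Collection<Spec> specs) {
        return specs.contains(Spec.MAX_TIMESTAMP) ? (short) 7 : (short) 0;
    }

    public static void main(String[] args) {
        System.out.println(minRequiredVersion(List.of(Spec.LATEST)));        // prints 0
        System.out.println(minRequiredVersion(List.of(Spec.MAX_TIMESTAMP))); // prints 7
    }
}
```

With this check in place the request builder can immediately downgrade to the broker's supported range instead of failing the whole listOffsets call.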

[jira] [Resolved] (KAFKA-13011) Update deleteTopics Admin API

2021-06-30 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-13011.
-
Fix Version/s: 3.0.0
   Resolution: Fixed

> Update deleteTopics Admin API
> -
>
> Key: KAFKA-13011
> URL: https://issues.apache.org/jira/browse/KAFKA-13011
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Justine Olshan
>Assignee: Justine Olshan
>Priority: Major
> Fix For: 3.0.0
>
>
> Implement the new deleteTopics APIs as described in the KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-4793) Kafka Connect: POST /connectors/(string: name)/restart doesn't start failed tasks

2021-06-30 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis resolved KAFKA-4793.
---
Resolution: Fixed

> Kafka Connect: POST /connectors/(string: name)/restart doesn't start failed 
> tasks
> -
>
> Key: KAFKA-4793
> URL: https://issues.apache.org/jira/browse/KAFKA-4793
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Gwen Shapira
>Assignee: Kalpesh Patel
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.0.0
>
>
> Sometimes tasks stop due to repeated failures. Users will want to restart the 
> connector and have it retry after fixing an issue. 
> We expected "POST /connectors/(string: name)/restart" to cause retry of 
> failed tasks, but this doesn't appear to be the case.
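The resolution for 3.0 (KIP-745) extends the existing restart endpoint with query parameters so failed tasks can be restarted as well. A hedged sketch of how a client might build that request (the parameter names follow KIP-745; the helper itself is hypothetical):

```java
public class ConnectRestart {
    // KIP-745 adds includeTasks and onlyFailed to the existing
    // POST /connectors/{name}/restart endpoint; with both set to true,
    // the connector plus its failed tasks are restarted.
    static String restartUri(String base, String connector,
                             boolean includeTasks, boolean onlyFailed) {
        return base + "/connectors/" + connector + "/restart"
             + "?includeTasks=" + includeTasks + "&onlyFailed=" + onlyFailed;
    }

    public static void main(String[] args) {
        // POST this URI to restart the connector and any failed tasks.
        System.out.println(restartUri("http://localhost:8083", "my-sink", true, true));
    }
}
```

With both parameters omitted, the endpoint keeps its old behavior of restarting only the Connector object, which is the gap this issue describes.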



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #272

2021-06-30 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #271

2021-06-30 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-13022) Optimize ClientQuotasImage#describe

2021-06-30 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-13022:


 Summary: Optimize ClientQuotasImage#describe
 Key: KAFKA-13022
 URL: https://issues.apache.org/jira/browse/KAFKA-13022
 Project: Kafka
  Issue Type: Improvement
Reporter: Colin McCabe


Optimize ClientQuotasImage#describe to be more efficient



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 3.0.0 release plan with new updated dates

2021-06-30 Thread Konstantine Karantasis
Hi all,

Today we have reached the Feature Freeze milestone for Apache Kafka 3.0.
Exciting!

I'm going to allow for any pending changes to settle within the next couple
of days.
I trust that we all approve and merge adopted features and changes which we
consider to be in good shape for 3.0.

Given the 4th of July holiday in the US, the 3.0 release branch will appear
sometime on Tuesday, July 6th.
Until then, please keep merging to trunk only the changes you intend to
include in Apache Kafka 3.0.

Regards,
Konstantine


On Wed, Jun 30, 2021 at 3:25 PM Konstantine Karantasis <
konstant...@confluent.io> wrote:

>
> Done. Thanks Luke!
>
> On Tue, Jun 29, 2021 at 6:39 PM Luke Chen  wrote:
>
>> Hi Konstantine,
>> We've decided that the KIP-726 will be released in V3.1, not V3.0.
>> KIP-726: Make the "cooperative-sticky, range" as the default assignor
>>
>> Could you please remove this KIP from the 3.0 release plan wiki page?
>>
>> Thank you.
>> Luke
>>
>> On Wed, Jun 30, 2021 at 8:23 AM Konstantine Karantasis
>>  wrote:
>>
>> > Thanks for the update Colin.
>> > They are now both in the release plan.
>> >
>> > Best,
>> > Konstantine
>> >
>> > On Tue, Jun 29, 2021 at 2:55 PM Colin McCabe 
>> wrote:
>> >
>> > > Hi Konstantine,
>> > >
>> > > Can you please add two KIPs to the 3.0 release plan wiki page?
>> > >
>> > > I'm thinking of:
>> > > KIP-630: Kafka Raft Snapshots
>> > > KIP-746: Revise KRaft Metadata Records
>> > >
>> > > These are marked as 3.0 on the KIP page but I guess we don't have
>> them on
>> > > the page yet.
>> > >
>> > > Many thanks.
>> > > Colin
>> > >
>> > >
>> > > On Tue, Jun 22, 2021, at 06:29, Josep Prat wrote:
>> > > > Hi there,
>> > > >
>> > > > As the feature freeze date is approaching, I just wanted to kindly
>> ask
>> > > for
>> > > > some reviews on the already submitted PR (
>> > > > https://github.com/apache/kafka/pull/10840) that implements the
>> > approved
>> > > > KIP-744 (https://cwiki.apache.org/confluence/x/XIrOCg). The PR has
>> > been
>> > > > ready for review for 2 weeks, and I simply want to make sure there
>> is
>> > > > enough time to address any possible changes that might be requested.
>> > > >
>> > > > Thanks in advance and sorry for any inconvenience caused,
>> > > > --
>> > > > Josep
>> > > > On Mon, Jun 21, 2021 at 11:54 PM Konstantine Karantasis
>> > > >  wrote:
>> > > >
>> > > > > Thanks for the update Bruno.
>> > > > > I've moved KIP-698 to the list of postponed KIPs in the plan.
>> > > > >
>> > > > > Konstantine
>> > > > >
>> > > > > On Mon, Jun 21, 2021 at 2:30 AM Bruno Cadonna > >
>> > > wrote:
>> > > > >
>> > > > > > Hi Konstantine,
>> > > > > >
>> > > > > > The implementation of
>> > > > > >
>> > > > > > KIP-698: Add Explicit User Initialization of Broker-side State
>> to
>> > > Kafka
>> > > > > > Streams
>> > > > > >
>> > > > > > will not be ready for 3.0, so you can remove it from the list.
>> > > > > >
>> > > > > > Best,
>> > > > > > Bruno
>> > > > > >
>> > > > > > On 15.06.21 07:33, Konstantine Karantasis wrote:
>> > > > > > > Done. Moved it into the table of Adopted KIPs targeting 3.0.0
>> and
>> > > to
>> > > > > the
>> > > > > > > release plan of course.
>> > > > > > > Thanks for catching this Israel.
>> > > > > > >
>> > > > > > > Best,
>> > > > > > > Konstantine
>> > > > > > >
>> > > > > > > On Mon, Jun 14, 2021 at 7:40 PM Israel Ekpo <
>> > israele...@gmail.com>
>> > > > > > wrote:
>> > > > > > >
>> > > > > > >> Konstantine,
>> > > > > > >>
>> > > > > > >> One of mine is missing from this list
>> > > > > > >>
>> > > > > > >> KIP-633: Drop 24 hour default of grace period in Streams
>> > > > > > >> Please could you include it?
>> > > > > > >>
>> > > > > > >> Voting has already concluded a long time ago
>> > > > > > >>
>> > > > > > >>
>> > > > > > >>
>> > > > > > >> On Mon, Jun 14, 2021 at 6:08 PM Konstantine Karantasis
>> > > > > > >>  wrote:
>> > > > > > >>
>> > > > > > >>> Hi all.
>> > > > > > >>>
>> > > > > > >>> KIP Freeze for the next major release of Apache Kafka was
>> > reached
>> > > > > last
>> > > > > > >>> week.
>> > > > > > >>>
>> > > > > > >>> As of now, 36 KIPs have concluded their voting process and
>> have
>> > > been
>> > > > > > >>> adopted.
>> > > > > > >>> These KIPs are targeting 3.0 (unless it's noted otherwise in
>> > the
>> > > > > > release
>> > > > > > >>> plan) and their inclusion as new features will be finalized
>> > right
>> > > > > after
>> > > > > > >>> Feature Freeze.
>> > > > > > >>>
>> > > > > > >>> At the high level, out of these 36 KIPs, 11 have been
>> > implemented
>> > > > > > already
>> > > > > > >>> and 25 are open or in progress.
>> > > > > > >>> Here is the full list of adopted KIPs:
>> > > > > > >>>
>> > > > > > >>> KIP-751: Drop support for Scala 2.12 in Kafka 4.0
>> (deprecate in
>> > > 3.0)
>> > > > > > >>> KIP-750: Drop support for Java 8 in Kafka 4.0 (deprecate in
>> > 3.0)
>> > > > > > >>> KIP-746: Revise KRaft Metadata Records
>> > > > > > >>> KIP-745: Connect API to restart connector and tas

Re: [DISCUSS] Apache Kafka 3.0.0 release plan with new updated dates

2021-06-30 Thread Konstantine Karantasis
Done. Thanks Luke!

On Tue, Jun 29, 2021 at 6:39 PM Luke Chen  wrote:

> Hi Konstantine,
> We've decided that the KIP-726 will be released in V3.1, not V3.0.
> KIP-726: Make the "cooperative-sticky, range" as the default assignor
>
> Could you please remove this KIP from the 3.0 release plan wiki page?
>
> Thank you.
> Luke
>
> On Wed, Jun 30, 2021 at 8:23 AM Konstantine Karantasis
>  wrote:
>
> > Thanks for the update Colin.
> > They are now both in the release plan.
> >
> > Best,
> > Konstantine
> >
> > On Tue, Jun 29, 2021 at 2:55 PM Colin McCabe  wrote:
> >
> > > Hi Konstantine,
> > >
> > > Can you please add two KIPs to the 3.0 release plan wiki page?
> > >
> > > I'm thinking of:
> > > KIP-630: Kafka Raft Snapshots
> > > KIP-746: Revise KRaft Metadata Records
> > >
> > > These are marked as 3.0 on the KIP page but I guess we don't have them
> on
> > > the page yet.
> > >
> > > Many thanks.
> > > Colin
> > >
> > >
> > > On Tue, Jun 22, 2021, at 06:29, Josep Prat wrote:
> > > > Hi there,
> > > >
> > > > As the feature freeze date is approaching, I just wanted to kindly
> ask
> > > for
> > > > some reviews on the already submitted PR (
> > > > https://github.com/apache/kafka/pull/10840) that implements the
> > approved
> > > > KIP-744 (https://cwiki.apache.org/confluence/x/XIrOCg). The PR has
> > been
> > > > ready for review for 2 weeks, and I simply want to make sure there is
> > > > enough time to address any possible changes that might be requested.
> > > >
> > > > Thanks in advance and sorry for any inconvenience caused,
> > > > --
> > > > Josep
> > > > On Mon, Jun 21, 2021 at 11:54 PM Konstantine Karantasis
> > > >  wrote:
> > > >
> > > > > Thanks for the update Bruno.
> > > > > I've moved KIP-698 to the list of postponed KIPs in the plan.
> > > > >
> > > > > Konstantine
> > > > >
> > > > > On Mon, Jun 21, 2021 at 2:30 AM Bruno Cadonna 
> > > wrote:
> > > > >
> > > > > > Hi Konstantine,
> > > > > >
> > > > > > The implementation of
> > > > > >
> > > > > > KIP-698: Add Explicit User Initialization of Broker-side State to
> > > Kafka
> > > > > > Streams
> > > > > >
> > > > > > will not be ready for 3.0, so you can remove it from the list.
> > > > > >
> > > > > > Best,
> > > > > > Bruno
> > > > > >
> > > > > > On 15.06.21 07:33, Konstantine Karantasis wrote:
> > > > > > > Done. Moved it into the table of Adopted KIPs targeting 3.0.0
> and
> > > to
> > > > > the
> > > > > > > release plan of course.
> > > > > > > Thanks for catching this Israel.
> > > > > > >
> > > > > > > Best,
> > > > > > > Konstantine
> > > > > > >
> > > > > > > On Mon, Jun 14, 2021 at 7:40 PM Israel Ekpo <
> > israele...@gmail.com>
> > > > > > wrote:
> > > > > > >
> > > > > > >> Konstantine,
> > > > > > >>
> > > > > > >> One of mine is missing from this list
> > > > > > >>
> > > > > > >> KIP-633: Drop 24 hour default of grace period in Streams
> > > > > > >> Please could you include it?
> > > > > > >>
> > > > > > >> Voting has already concluded a long time ago
> > > > > > >>
> > > > > > >>
> > > > > > >>
> > > > > > >> On Mon, Jun 14, 2021 at 6:08 PM Konstantine Karantasis
> > > > > > >>  wrote:
> > > > > > >>
> > > > > > >>> Hi all.
> > > > > > >>>
> > > > > > >>> KIP Freeze for the next major release of Apache Kafka was
> > reached
> > > > > last
> > > > > > >>> week.
> > > > > > >>>
> > > > > > >>> As of now, 36 KIPs have concluded their voting process and
> have
> > > been
> > > > > > >>> adopted.
> > > > > > >>> These KIPs are targeting 3.0 (unless it's noted otherwise in
> > the
> > > > > > release
> > > > > > >>> plan) and their inclusion as new features will be finalized
> > right
> > > > > after
> > > > > > >>> Feature Freeze.
> > > > > > >>>
> > > > > > >>> At the high level, out of these 36 KIPs, 11 have been
> > implemented
> > > > > > already
> > > > > > >>> and 25 are open or in progress.
> > > > > > >>> Here is the full list of adopted KIPs:
> > > > > > >>>
> > > > > > >>> KIP-751: Drop support for Scala 2.12 in Kafka 4.0 (deprecate
> in
> > > 3.0)
> > > > > > >>> KIP-750: Drop support for Java 8 in Kafka 4.0 (deprecate in
> > 3.0)
> > > > > > >>> KIP-746: Revise KRaft Metadata Records
> > > > > > >>> KIP-745: Connect API to restart connector and tasks
> > > > > > >>> KIP-744: Migrate TaskMetadata and ThreadMetadata to an
> > interface
> > > with
> > > > > > >>> internal implementation
> > > > > > >>> KIP-743: Remove config value 0.10.0-2.4 of Streams built-in
> > > metrics
> > > > > > >> version
> > > > > > >>> config
> > > > > > >>> KIP-741: Change default serde to be null
> > > > > > >>> KIP-740: Clean up public API in TaskId and fix
> > > TaskMetadata#taskId()
> > > > > > >>> KIP-738: Removal of Connect's internal converter properties
> > > > > > >>> KIP-734: Improve AdminClient.listOffsets to return timestamp
> > and
> > > > > offset
> > > > > > >> for
> > > > > > >>> the record with the largest timestamp
> > > > > > >>> KIP-733: Change Kafka Streams default replication factor
> config
> > > > > > >>> KI

[jira] [Resolved] (KAFKA-13018) test_multi_version failures

2021-06-30 Thread Justine Olshan (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justine Olshan resolved KAFKA-13018.

Resolution: Fixed

Fixed by [#10918|https://github.com/apache/kafka/pull/10918]

> test_multi_version failures
> ---
>
> Key: KAFKA-13018
> URL: https://issues.apache.org/jira/browse/KAFKA-13018
> Project: Kafka
>  Issue Type: Bug
>Reporter: Justine Olshan
>Assignee: Justine Olshan
>Priority: Major
>
> This system test fails with the error
> {{Exception in thread "main" joptsimple.UnrecognizedOptionException: 
> zookeeper is not a recognized option}}
> {{joptsimple.OptionException.unrecognizedOption(OptionException.java:108)}}
> {{joptsimple.OptionParser.handleLongOptionToken(OptionParser.java:510)}}
> {{joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:56)}}
> {{ joptsimple.OptionParser.parse(OptionParser.java:396)}}
> {{kafka.admin.TopicCommand$TopicCommandOptions.<init>(TopicCommand.scala:516)}}
> {{kafka.admin.TopicCommand$.main(TopicCommand.scala:47)}}
> {{ kafka.admin.TopicCommand.main(TopicCommand.scala)}}
>  
> Seems like when we try to run with both the 0.8.2.X version and the dev
> version, we have this issue, since {{test_0_8_2}}, which uses only 0.8.2.X,
> passes. Likely this has to do with {{create_topic}} in kafka.py and how it
> chooses whether or not to use ZK. Maybe it will work if we specify node 0
> instead of 1 as the older version node.
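As a hedged illustration of the version dependence at play: {{--zookeeper}} was removed from the dev-version TopicCommand, while the 0.8.2.x tools require it, so a mixed-version test harness has to pick the flag per tool version. Roughly, with a hypothetical helper:

```java
public class TopicToolFlag {
    // --zookeeper is gone from kafka-topics on trunk/3.0, while older tool
    // versions (e.g. 0.8.2.x) only understand --zookeeper. Hypothetical
    // helper a mixed-version system test harness might use.
    static String connectionFlag(String toolVersion) {
        int major = Integer.parseInt(toolVersion.split("\\.")[0]);
        return major >= 3 ? "--bootstrap-server" : "--zookeeper";
    }

    public static void main(String[] args) {
        System.out.println(connectionFlag("3.0.0")); // --bootstrap-server
        System.out.println(connectionFlag("0.8.2")); // --zookeeper
    }
}
```

The reported failure is consistent with the harness passing {{--zookeeper}} to a dev-version tool that no longer recognizes it.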



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13021) Improve Javadocs for API Changes from KIP-633

2021-06-30 Thread Israel Ekpo (Jira)
Israel Ekpo created KAFKA-13021:
---

 Summary: Improve Javadocs for API Changes from KIP-633
 Key: KAFKA-13021
 URL: https://issues.apache.org/jira/browse/KAFKA-13021
 Project: Kafka
  Issue Type: Improvement
  Components: documentation, streams
Affects Versions: 3.0.0
Reporter: Israel Ekpo
Assignee: Israel Ekpo


There are Javadoc changes from the following PR that needs to be completed 
prior to the 3.0 release. This Jira item is to track that work

[https://github.com/apache/kafka/pull/10926]

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #270

2021-06-30 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-13020) SnapshotReader should decode and report the append time in the header

2021-06-30 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-13020:
--

 Summary: SnapshotReader should decode and report the append time in 
the header
 Key: KAFKA-13020
 URL: https://issues.apache.org/jira/browse/KAFKA-13020
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jose Armando Garcia Sancio
Assignee: Jose Armando Garcia Sancio






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Request to be added as a contributor to Kafka JIRA board

2021-06-30 Thread Bill Bejeck
You're set up on the wiki now as well.

On Wed, Jun 30, 2021 at 2:36 PM Bill Bejeck  wrote:

> Hi Niket,
>
> You're now a contributor on Jira, but I couldn't find "ngeol" on the
> Confluence wiki - https://cwiki.apache.org/confluence/display/KAFKA/Index
> .
> Go ahead and create an account there, and share the user name here and
> then we can update the permissions.
>
> -Bill
>
> On Wed, Jun 30, 2021 at 2:11 PM Niket Goel 
> wrote:
>
>> Hi kafka-dev team,
>>
>> Please add me as a contributor to the Kafka JIRA and Confluence wiki. My
>> JIRA username is 'Niket Goel' and my Confluence ID is ngoel.
>>
>> Thanks
>> Niket Goel
>>
>>
>>
>>


[jira] [Created] (KAFKA-13019) Add MetadataImage and MetadataDelta classes for KRaft Snapshots

2021-06-30 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-13019:


 Summary: Add MetadataImage and MetadataDelta classes for KRaft 
Snapshots
 Key: KAFKA-13019
 URL: https://issues.apache.org/jira/browse/KAFKA-13019
 Project: Kafka
  Issue Type: Improvement
Reporter: Colin McCabe
Assignee: Colin McCabe






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Request to be added as a contributor to Kafka JIRA board

2021-06-30 Thread Bill Bejeck
Hi Niket,

You're now a contributor on Jira, but I couldn't find "ngeol" on the
Confluence wiki - https://cwiki.apache.org/confluence/display/KAFKA/Index.
Go ahead and create an account there, and share the user name here and then
we can update the permissions.

-Bill

On Wed, Jun 30, 2021 at 2:11 PM Niket Goel 
wrote:

> Hi kafka-dev team,
>
> Please add me as a contributor to the Kafka JIRA and Confluence wiki. My
> JIRA username is 'Niket Goel' and my Confluence ID is ngoel.
>
> Thanks
> Niket Goel
>
>
>
>


Request to be added as a contributor to Kafka JIRA board

2021-06-30 Thread Niket Goel
Hi kafka-dev team,

Please add me as a contributor to the Kafka JIRA and Confluence wiki. My JIRA 
username is 'Niket Goel' and my Confluence ID is ngoel.

Thanks
Niket Goel





Re: [DISCUSS] KIP-729 Custom validation of records on the broker prior to log append

2021-06-30 Thread Soumyajit Sahu
Hi Nikolay,
Great to hear that. I'm ok with either one too.
I had missed noticing the KIP-686. Thanks for bringing it up.

I have tried to keep this one simple, but hope it can cover all our
enterprise needs.

Should we put this one for vote?

Regards,
Soumyajit


On Wed, Jun 30, 2021, 8:50 AM Nikolay Izhikov  wrote:

> Team, if we have support from committers for an API to check records on the
> broker side, let’s choose one KIP to go with and move forward to vote and
> implementation.
>
> I’m ready to drive the implementation of this API.
> It seems very useful to me.
>
> > On 30 June 2021, at 18:04, Nikolay Izhikov wrote:
> >
> > Hello.
> >
> > I had a very similar proposal [1].
> > So, yes, I think we should have one implementation of API in the product.
> >
> > [1]
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-686%3A+API+to+ensure+Records+policy+on+the+broker
> >
> >> On 30 June 2021, at 17:57, Christopher Shannon <
> christopher.l.shan...@gmail.com> wrote:
> >>
> >> I would find this feature very useful as well as adding custom
> validation
> >> to incoming records would be nice to prevent bad data from making it to
> the
> >> topic.
> >>
> >> On Wed, Apr 7, 2021 at 7:03 PM Soumyajit Sahu  >
> >> wrote:
> >>
> >>> Thanks Colin! Good call on the ApiRecordError. We could use
> >>> InvalidRecordException instead, and have the broker convert it
> >>> to ApiRecordError.
> >>> Modified signature below.
> >>>
> >>> interface BrokerRecordValidator {
> >>>  /**
> >>>   * Validate the record for a given topic-partition.
> >>>   */
> >>>   Optional validateRecord(TopicPartition
> >>> topicPartition, ByteBuffer key, ByteBuffer value, Header[] headers);
> >>> }
> >>>
> >>> On Tue, Apr 6, 2021 at 5:09 PM Colin McCabe 
> wrote:
> >>>
>  Hi Soumyajit,
> 
>  The difficult thing is deciding which fields to share and how to share
>  them.  Key and value are probably the minimum we need to make this
> >>> useful.
>  If we do choose to go with byte buffer, it is not necessary to also
> pass
>  the size, since ByteBuffer maintains that internally.
> 
>  ApiRecordError is also an internal class, so it can't be used in a
> public
>  API.  I think most likely if we were going to do this, we would just
> >>> catch
>  an exception and use the exception text as the validation error.
> 
>  best,
>  Colin
> 
> 
>  On Tue, Apr 6, 2021, at 15:57, Soumyajit Sahu wrote:
> > Hi Tom,
> >
> > Makes sense. Thanks for the explanation. I get what Colin had meant
>  earlier.
> >
> > Would a different signature for the interface work? Example below,
> but
> > please feel free to suggest alternatives if there are any
> possibilities
>  of
> > such.
> >
> > If needed, then deprecating this and introducing a new signature
> would
> >>> be
> > straight-forward as both (old and new) calls could be made serially
> in
>  the
> > LogValidator allowing a coexistence for a transition period.
> >
> > interface BrokerRecordValidator {
> >   /**
> >* Validate the record for a given topic-partition.
> >*/
> >   Optional validateRecord(TopicPartition
>  topicPartition,
> > int keySize, ByteBuffer key, int valueSize, ByteBuffer value,
> Header[]
> > headers);
> > }
> >
> >
> > On Tue, Apr 6, 2021 at 12:54 AM Tom Bentley 
> >>> wrote:
> >
> >> Hi Soumyajit,
> >>
> >> Although that class does indeed have public access at the Java
> level,
>  it
> >> does so only because it needs to be used by internal Kafka code
> which
>  lives
> >> in other packages (there isn't any more restrictive access modifier
>  which
> >> would work). What the project considers public Java API is
> determined
>  by
> >> what's included in the published Javadocs:
> >> https://kafka.apache.org/27/javadoc/index.html, which doesn't
> >>> include
>  the
> >> org.apache.kafka.common.record package.
> >>
> >> One of the problems with making these internal classes public is it
>  ties
> >> the project into supporting them as APIs, which can make changing
> >>> them
>  much
> >> harder and in the long run that can slow, or even prevent,
> innovation
>  in
> >> the rest of Kafka.
> >>
> >> Kind regards,
> >>
> >> Tom
> >>
> >>
> >>
> >> On Sun, Apr 4, 2021 at 7:31 PM Soumyajit Sahu <
>  soumyajit.s...@gmail.com>
> >> wrote:
> >>
> >>> Hi Colin,
> >>> I see that both the interface "Record" and the implementation
> >>> "DefaultRecord" being used in LogValidator.java are public
> >>> interfaces/classes.
> >>>
> >>>
> >>>
> >>
> 
> >>>
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/Records.java
> >>> and
> >>>
> >>>
> >>
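The BrokerRecordValidator interface proposed in the quoted thread above could be implemented, hypothetically, along these lines. Note the simplified stand-in records for Kafka's TopicPartition and Header types (the real classes live in org.apache.kafka.common), and the use of an error-message Optional rather than the internal ApiRecordError class, as discussed:

```java
import java.nio.ByteBuffer;
import java.util.Optional;

public class NonNullKeyValidator {
    // Simplified stand-ins for the Kafka types named in the proposal.
    record TopicPartition(String topic, int partition) {}
    record Header(String key, byte[] value) {}

    interface BrokerRecordValidator {
        /** Returns an error message if the record is invalid, empty otherwise. */
        Optional<String> validateRecord(TopicPartition tp, ByteBuffer key,
                                        ByteBuffer value, Header[] headers);
    }

    // Example policy: reject records that carry no key (e.g. for topics
    // where downstream consumers rely on keyed partitioning).
    static final BrokerRecordValidator REQUIRE_KEY = (tp, key, value, headers) ->
        (key == null || !key.hasRemaining())
            ? Optional.of("Record for " + tp.topic() + "-" + tp.partition() + " has no key")
            : Optional.empty();

    public static void main(String[] args) {
        TopicPartition tp = new TopicPartition("t", 0);
        System.out.println(REQUIRE_KEY
            .validateRecord(tp, null, ByteBuffer.allocate(1), new Header[0]).isPresent());
        System.out.println(REQUIRE_KEY
            .validateRecord(tp, ByteBuffer.wrap("k".getBytes()), null, new Header[0]).isEmpty());
    }
}
```

On the broker side, LogValidator would invoke the configured validator per record and convert a non-empty result into an InvalidRecordException, which is then surfaced to the producer.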

[jira] [Created] (KAFKA-13018) test_multi_version failures

2021-06-30 Thread Justine Olshan (Jira)
Justine Olshan created KAFKA-13018:
--

 Summary: test_multi_version failures
 Key: KAFKA-13018
 URL: https://issues.apache.org/jira/browse/KAFKA-13018
 Project: Kafka
  Issue Type: Bug
Reporter: Justine Olshan


This system test fails with the error


{{Exception in thread "main" joptsimple.UnrecognizedOptionException: zookeeper 
is not a recognized option}}

{{joptsimple.OptionException.unrecognizedOption(OptionException.java:108)}}

{{joptsimple.OptionParser.handleLongOptionToken(OptionParser.java:510)}}

{{joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:56)}}

{{ joptsimple.OptionParser.parse(OptionParser.java:396)}}

{{kafka.admin.TopicCommand$TopicCommandOptions.<init>(TopicCommand.scala:516)}}

{{kafka.admin.TopicCommand$.main(TopicCommand.scala:47)}}

{{ kafka.admin.TopicCommand.main(TopicCommand.scala)}}

 

Seems like when we try to run with both the 0.8.2.X version and the dev version, we 
have this issue, since {{test_0_8_2}}, which uses only 0.8.2.X, passes. Likely 
this has to do with {{create_topic}} in kafka.py and how it chooses whether or 
not to use ZK. Maybe it will work if we specify node 0 instead of 1 as the older 
version node.




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-729 Custom validation of records on the broker prior to log append

2021-06-30 Thread Nikolay Izhikov
Team, if we have support from committers for an API to check records on the broker 
side, let’s choose one KIP to go with and move forward to vote and 
implementation.

I’m ready to drive the implementation of this API.
It seems very useful to me.

> On 30 June 2021, at 18:04, Nikolay Izhikov wrote:
> 
> Hello.
> 
> I had a very similar proposal [1].
> So, yes, I think we should have one implementation of API in the product.
> 
> [1] 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-686%3A+API+to+ensure+Records+policy+on+the+broker
> 
>> On 30 June 2021, at 17:57, Christopher Shannon wrote:
>> 
>> I would find this feature very useful as well as adding custom validation
>> to incoming records would be nice to prevent bad data from making it to the
>> topic.
>> 
>> On Wed, Apr 7, 2021 at 7:03 PM Soumyajit Sahu 
>> wrote:
>> 
>>> Thanks Colin! Good call on the ApiRecordError. We could use
>>> InvalidRecordException instead, and have the broker convert it
>>> to ApiRecordError.
>>> Modified signature below.
>>> 
>>> interface BrokerRecordValidator {
>>>  /**
>>>   * Validate the record for a given topic-partition.
>>>   */
>>>   Optional validateRecord(TopicPartition
>>> topicPartition, ByteBuffer key, ByteBuffer value, Header[] headers);
>>> }
>>> 
>>> On Tue, Apr 6, 2021 at 5:09 PM Colin McCabe  wrote:
>>> 
 Hi Soumyajit,
 
The difficult thing is deciding which fields to share and how to share
them.  Key and value are probably the minimum we need to make this useful.
If we do choose to go with byte buffer, it is not necessary to also pass
the size, since ByteBuffer maintains that internally.

ApiRecordError is also an internal class, so it can't be used in a public
API.  I think most likely if we were going to do this, we would just catch
an exception and use the exception text as the validation error.

best,
Colin


On Tue, Apr 6, 2021, at 15:57, Soumyajit Sahu wrote:
> Hi Tom,
>
> Makes sense. Thanks for the explanation. I get what Colin had meant earlier.
>
> Would a different signature for the interface work? Example below, but
> please feel free to suggest alternatives if there are any possibilities of such.
>
> If needed, then deprecating this and introducing a new signature would be
> straight-forward as both (old and new) calls could be made serially in the
> LogValidator allowing a coexistence for a transition period.
>
> interface BrokerRecordValidator {
>   /**
>    * Validate the record for a given topic-partition.
>    */
>   Optional validateRecord(TopicPartition topicPartition,
>       int keySize, ByteBuffer key, int valueSize, ByteBuffer value, Header[] headers);
> }
>
>
> On Tue, Apr 6, 2021 at 12:54 AM Tom Bentley  wrote:
>
>> Hi Soumyajit,
>>
>> Although that class does indeed have public access at the Java level, it
>> does so only because it needs to be used by internal Kafka code which lives
>> in other packages (there isn't any more restrictive access modifier which
>> would work). What the project considers public Java API is determined by
>> what's included in the published Javadocs:
>> https://kafka.apache.org/27/javadoc/index.html, which doesn't include the
>> org.apache.kafka.common.record package.
>>
>> One of the problems with making these internal classes public is it ties
>> the project into supporting them as APIs, which can make changing them much
>> harder and in the long run that can slow, or even prevent, innovation in
>> the rest of Kafka.
>>
>> Kind regards,
>>
>> Tom
>>
>>
>>
>> On Sun, Apr 4, 2021 at 7:31 PM Soumyajit Sahu <soumyajit.s...@gmail.com> wrote:
>>
>>> Hi Colin,
>>> I see that both the interface "Record" and the implementation
>>> "DefaultRecord" being used in LogValidator.java are public
>>> interfaces/classes.
>>>
>>> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/Records.java
>>> and
>>> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/DefaultRecord.java
>>>
>>> So, it should be ok to use them. Let me know what you think.
>>>
>>> Thanks,
>>> Soumyajit
>>>
>>>
>>> On Fri, Apr 2, 2021 at 8:51 AM Colin McCabe  wrote:
>>>
>>>> Hi Soumyajit,
>>>>
>>>> I believe we've had discussions about proposals similar to this before,
>>>> although I'm having trouble finding one right now.  The issue here is that
>>>> Record is a private class -- it is not part of any 
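Colin's aside above, that a separate size argument is redundant next to a ByteBuffer, can be checked directly (a minimal standalone sketch, not part of the thread):

```java
import java.nio.ByteBuffer;

public class BufferSize {
    public static void main(String[] args) {
        // ByteBuffer.wrap sets position 0 and limit 4, so the buffer itself
        // already tracks how many readable bytes it holds via remaining().
        ByteBuffer value = ByteBuffer.wrap(new byte[]{1, 2, 3, 4});
        System.out.println(value.remaining()); // prints 4
    }
}
```

Any validator handed the ByteBuffer can recover the payload length itself, which is why the extra `keySize`/`valueSize` parameters in the proposed signature are unnecessary.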

[jira] [Created] (KAFKA-13017) Excessive logging on sink task deserialization errors

2021-06-30 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-13017:
-

 Summary: Excessive logging on sink task deserialization errors
 Key: KAFKA-13017
 URL: https://issues.apache.org/jira/browse/KAFKA-13017
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.8.0, 2.7.0, 3.0.0
Reporter: Chris Egerton
Assignee: Chris Egerton


Even with {{errors.log.enable}} set to {{false}}, deserialization failures are 
still logged at {{ERROR}} level by the 
{{org.apache.kafka.connect.runtime.WorkerSinkTask}} namespace. This becomes 
problematic in pipelines with {{errors.tolerance}} set to {{all}}, and can 
generate excessive logging of stack traces when deserialization errors are 
encountered for most if not all of the records being consumed by a sink task.
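For reference, the affected setup is a sink connector configured to tolerate record-level errors while silencing the error log (a sketch only; the connector name, class, and topic below are placeholders):

```json
{
  "name": "example-sink",
  "config": {
    "connector.class": "org.example.ExampleSinkConnector",
    "topics": "events",
    "errors.tolerance": "all",
    "errors.log.enable": "false"
  }
}
```

With `errors.tolerance` set to `all` and `errors.log.enable` set to `false`, the user has asked for failures to be skipped quietly, yet `WorkerSinkTask` still emits an `ERROR`-level stack trace for every deserialization failure.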



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13016) Interpret snapshot header version to correctly parse the snapshot

2021-06-30 Thread Niket Goel (Jira)
Niket Goel created KAFKA-13016:
--

 Summary: Interpret snapshot header version to correctly parse the 
snapshot
 Key: KAFKA-13016
 URL: https://issues.apache.org/jira/browse/KAFKA-13016
 Project: Kafka
  Issue Type: Sub-task
Reporter: Niket Goel


https://issues.apache.org/jira/browse/KAFKA-12952 adds delimiters to the 
snapshot files. These delimiters serve as start and end markers for the 
snapshots and also contain some metadata information about the snapshots. The 
snapshot consumers need to interpret the version within the header to correctly 
parse the schema of the snapshot being consumed or throw meaningful errors when 
consuming incompatible snapshot versions.
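The check described above amounts to reading the header version before anything else and refusing versions outside the supported range. A minimal sketch (the two-byte version field and the supported range are invented for illustration; this is not the actual Kafka snapshot layout):

```java
import java.nio.ByteBuffer;

public class SnapshotHeaderCheck {
    // Hypothetical supported range for this reader; a real reader would take
    // these from the snapshot schema definitions.
    static final short MIN_VERSION = 0;
    static final short MAX_VERSION = 1;

    // Read the header version first, then either dispatch to a parser for
    // that version or fail with a meaningful error.
    static short readHeaderVersion(ByteBuffer snapshot) {
        short version = snapshot.getShort();
        if (version < MIN_VERSION || version > MAX_VERSION) {
            throw new IllegalArgumentException(
                "Unsupported snapshot header version " + version
                + "; this reader supports [" + MIN_VERSION + ", " + MAX_VERSION + "]");
        }
        return version;
    }
}
```

The point of the ticket is exactly this ordering: consumers must interpret the version field before attempting to parse the rest of the snapshot, so that incompatible snapshots fail loudly instead of being misparsed.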



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-729 Custom validation of records on the broker prior to log append

2021-06-30 Thread Nikolay Izhikov
Hello.

I had a very similar proposal [1].
So, yes, I think we should have a single implementation of this API in the product.

[1] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-686%3A+API+to+ensure+Records+policy+on+the+broker

> On 30 June 2021, at 17:57, Christopher Shannon  wrote:
> 
> I would find this feature very useful as well, as adding custom validation
> to incoming records would be nice to prevent bad data from making it to the
> topic.

Re: [DISCUSS] KIP-729 Custom validation of records on the broker prior to log append

2021-06-30 Thread Christopher Shannon
I would find this feature very useful as well, as adding custom validation
to incoming records would be nice to prevent bad data from making it to the
topic.

On Wed, Apr 7, 2021 at 7:03 PM Soumyajit Sahu  wrote:

> Thanks Colin! Good call on the ApiRecordError. We could use
> InvalidRecordException instead, and have the broker convert it
> to ApiRecordError.
> Modified signature below.
>
> interface BrokerRecordValidator {
>    /**
>     * Validate the record for a given topic-partition.
>     */
>    Optional validateRecord(TopicPartition topicPartition,
>        ByteBuffer key, ByteBuffer value, Header[] headers);
> }
>
> On Tue, Apr 6, 2021 at 5:09 PM Colin McCabe  wrote:
>
> > Hi Soumyajit,
> >
> > The difficult thing is deciding which fields to share and how to share
> > them.  Key and value are probably the minimum we need to make this useful.
> > If we do choose to go with byte buffer, it is not necessary to also pass
> > the size, since ByteBuffer maintains that internally.
> >
> > ApiRecordError is also an internal class, so it can't be used in a public
> > API.  I think most likely if we were going to do this, we would just catch
> > an exception and use the exception text as the validation error.
> >
> > best,
> > Colin
> >
> >
> > On Tue, Apr 6, 2021, at 15:57, Soumyajit Sahu wrote:
> > > Hi Tom,
> > >
> > > Makes sense. Thanks for the explanation. I get what Colin had meant earlier.
> > >
> > > Would a different signature for the interface work? Example below, but
> > > please feel free to suggest alternatives if there are any possibilities of such.
> > >
> > > If needed, then deprecating this and introducing a new signature would be
> > > straight-forward as both (old and new) calls could be made serially in the
> > > LogValidator allowing a coexistence for a transition period.
> > >
> > > interface BrokerRecordValidator {
> > >     /**
> > >      * Validate the record for a given topic-partition.
> > >      */
> > >     Optional validateRecord(TopicPartition topicPartition,
> > >         int keySize, ByteBuffer key, int valueSize, ByteBuffer value, Header[] headers);
> > > }
> > >
> > >
> > > On Tue, Apr 6, 2021 at 12:54 AM Tom Bentley  wrote:
> > >
> > > > Hi Soumyajit,
> > > >
> > > > Although that class does indeed have public access at the Java level, it
> > > > does so only because it needs to be used by internal Kafka code which lives
> > > > in other packages (there isn't any more restrictive access modifier which
> > > > would work). What the project considers public Java API is determined by
> > > > what's included in the published Javadocs:
> > > > https://kafka.apache.org/27/javadoc/index.html, which doesn't include the
> > > > org.apache.kafka.common.record package.
> > > >
> > > > One of the problems with making these internal classes public is it ties
> > > > the project into supporting them as APIs, which can make changing them much
> > > > harder and in the long run that can slow, or even prevent, innovation in
> > > > the rest of Kafka.
> > > >
> > > > Kind regards,
> > > >
> > > > Tom
> > > >
> > > >
> > > >
> > > > On Sun, Apr 4, 2021 at 7:31 PM Soumyajit Sahu <soumyajit.s...@gmail.com> wrote:
> > > >
> > > > > Hi Colin,
> > > > > I see that both the interface "Record" and the implementation
> > > > > "DefaultRecord" being used in LogValidator.java are public
> > > > > interfaces/classes.
> > > > >
> > > > > https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/Records.java
> > > > > and
> > > > > https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/DefaultRecord.java
> > > > >
> > > > > So, it should be ok to use them. Let me know what you think.
> > > > >
> > > > > Thanks,
> > > > > Soumyajit
> > > > >
> > > > >
> > > > > On Fri, Apr 2, 2021 at 8:51 AM Colin McCabe  wrote:
> > > > >
> > > > > > Hi Soumyajit,
> > > > > >
> > > > > > I believe we've had discussions about proposals similar to this before,
> > > > > > although I'm having trouble finding one right now.  The issue here is
> > > > > > that Record is a private class -- it is not part of any public API, and
> > > > > > may change at any time.  So we can't expose it in public APIs.
> > > > > >
> > > > > > best,
> > > > > > Colin
> > > > > >
> > > > > >
> > > > > > On Thu, Apr 1, 2021, at 14:18, Soumyajit Sahu wrote:
> > > > > > > Hello All,
> > > > > > > I would like to start a discussion on the KIP-729.
> > > > > > >
> > > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-729%3A+Custom+validation+of+records+on+the+broker+prior+to+log+append
> > > > > > >
> > > > > > > Thanks!
> > > > > > > Soumyajit
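For readers following the KIP-729 thread above, the proposed hook could look roughly like this. This is a hypothetical sketch only: BrokerRecordValidator is a proposal, not a released Kafka API; the plain topic/partition parameters stand in for Kafka's TopicPartition, and headers are omitted for brevity.

```java
import java.nio.ByteBuffer;
import java.util.Optional;

// Sketch of the interface shape discussed in the thread: the broker would
// call validateRecord before appending to the log, and a returned error
// message would reject the record.
interface BrokerRecordValidator {
    /** Returns a validation error message, or empty if the record is acceptable. */
    Optional<String> validateRecord(String topic, int partition,
                                    ByteBuffer key, ByteBuffer value);
}

// One possible plugin under that contract: reject records that carry no key,
// the kind of "bad data" check Christopher mentions above.
class RequireKeyValidator implements BrokerRecordValidator {
    @Override
    public Optional<String> validateRecord(String topic, int partition,
                                           ByteBuffer key, ByteBuffer value) {
        if (key == null || key.remaining() == 0) {
            return Optional.of("Records for " + topic + "-" + partition
                    + " must carry a non-empty key");
        }
        return Optional.empty();
    }
}
```

As Colin notes in the thread, an exception-based contract (throwing InvalidRecordException and letting the broker convert it) would work equally well; the Optional-of-error shape above simply mirrors the signatures being discussed.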


[jira] [Created] (KAFKA-13015) Create System tests for Metadata Snapshots

2021-06-30 Thread Niket Goel (Jira)
Niket Goel created KAFKA-13015:
--

 Summary: Create System tests for Metadata Snapshots
 Key: KAFKA-13015
 URL: https://issues.apache.org/jira/browse/KAFKA-13015
 Project: Kafka
  Issue Type: Sub-task
Reporter: Niket Goel






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #269

2021-06-30 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #268

2021-06-30 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-13014) Kafka Streams stuck when the offset no longer exists

2021-06-30 Thread Ahmed Toumi (Jira)
Ahmed Toumi created KAFKA-13014:
---

 Summary: Kafka Streams stuck when the offset no longer exists
 Key: KAFKA-13014
 URL: https://issues.apache.org/jira/browse/KAFKA-13014
 Project: Kafka
  Issue Type: Bug
  Components: clients, consumer, offset manager, streams
Affects Versions: 2.7.0
 Environment: PROD
Reporter: Ahmed Toumi
 Attachments: image-2021-06-30-11-10-31-028.png

We have a Kafka Streams application with multiple instances and threads.

This application consumes from a lot of topics.

One of the topic partitions wasn't accessible for a day, and the retention of 
the topic is 4 hours.

After fixing the problem, the Kafka Streams application is trying to consume from an offset 
that does not exist anymore:
 * kafka-consumer-groups describe output:

!image-2021-06-30-11-10-31-028.png!

We can see that the current offset the Kafka Streams consumer is waiting on is *59754934*, but 
the new first offset of this topic is *264896001*.

The problem is that Kafka Streams does not throw any exception.

This is the only log I'm seeing:

 
{code:java}
08:44:53.924 [talaria-data-mixed-prod-c3d6ac16-516c-49ee-a34e-bde5f3f629dc-StreamThread-2] INFO  o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=talaria-data-mixed-prod-c3d6ac16-516c-49ee-a34e-bde5f3f629dc-StreamThread-2-consumer, groupId=talaria-data-mixed-prod] Updating assignment with
    Assigned partitions:                    [adm__article_ean_repartition_v3-10, adm__article_itm_repartition_v3-10, adm__article_sign_repartition_v3-10, adm__article_stock_repartition_v3-10]
    Current owned partitions:               [adm__article_ean_repartition_v3-10, adm__article_itm_repartition_v3-10, adm__article_sign_repartition_v3-10, adm__article_stock_repartition_v3-10]
    Added partitions (assigned - owned):    []
    Revoked partitions (owned - assigned):  []
08:44:53.924 [talaria-data-mixed-prod-c3d6ac16-516c-49ee-a34e-bde5f3f629dc-StreamThread-2] INFO  o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=talaria-data-mixed-prod-c3d6ac16-516c-49ee-a34e-bde5f3f629dc-StreamThread-2-consumer, groupId=talaria-data-mixed-prod] Notifying assignor about the new Assignment(partitions=[adm__article_stock_repartition_v3-10, adm__article_sign_repartition_v3-10, adm__article_itm_repartition_v3-10, adm__article_ean_repartition_v3-10], userDataSize=398)
08:44:53.924 [talaria-data-mixed-prod-c3d6ac16-516c-49ee-a34e-bde5f3f629dc-StreamThread-2] INFO  o.a.k.s.p.i.StreamsPartitionAssignor - stream-thread [talaria-data-mixed-prod-c3d6ac16-516c-49ee-a34e-bde5f3f629dc-StreamThread-2-consumer] No followup rebalance was requested, resetting the rebalance schedule.
08:44:53.924 [talaria-data-mixed-prod-c3d6ac16-516c-49ee-a34e-bde5f3f629dc-StreamThread-2] INFO  o.a.k.s.p.internals.TaskManager - stream-thread [talaria-data-mixed-prod-c3d6ac16-516c-49ee-a34e-bde5f3f629dc-StreamThread-2] Handle new assignment with:
    New active tasks: [0_10]
    New standby tasks: [0_17, 0_21]
    Existing active tasks: [0_10]
    Existing standby tasks: [0_17, 0_21]
08:44:53.924 [talaria-data-mixed-prod-c3d6ac16-516c-49ee-a34e-bde5f3f629dc-StreamThread-2] INFO  o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=talaria-data-mixed-prod-c3d6ac16-516c-49ee-a34e-bde5f3f629dc-StreamThread-2-consumer, groupId=talaria-data-mixed-prod] Adding newly assigned partitions: 
{code}
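The stuck state reported above boils down to the committed offset falling outside the partition's current log range. A minimal sketch using the offsets from this report (the log-end value is made up for illustration):

```java
public class OffsetRangeCheck {
    // A committed offset is consumable only if it still lies within the
    // partition's [log-start, log-end] range. In the report above, the
    // committed offset 59754934 is far below the new log start offset
    // 264896001, so the group is waiting on an offset that no longer exists.
    static boolean offsetOutOfRange(long committed, long logStartOffset, long logEndOffset) {
        return committed < logStartOffset || committed > logEndOffset;
    }

    public static void main(String[] args) {
        System.out.println(offsetOutOfRange(59_754_934L, 264_896_001L, 270_000_000L)); // prints true
    }
}
```

A plain consumer in this situation would fall back to its `auto.offset.reset` policy (or raise an out-of-range error when it is set to `none`); the complaint in this ticket is that Streams surfaces neither an exception nor a reset.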



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12921) Upgrade ZSTD JNI from 1.4.9-1 to 1.5.0-1

2021-06-30 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-12921.
-
Resolution: Fixed

> Upgrade ZSTD JNI from 1.4.9-1 to 1.5.0-1
> 
>
> Key: KAFKA-12921
> URL: https://issues.apache.org/jira/browse/KAFKA-12921
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: David Christle
>Priority: Major
> Fix For: 3.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)