Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #292

2021-07-06 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-760: Increase minimum value of segment.ms and segment.bytes

2021-07-06 Thread Badai Aqrandista
James

Thank you for replying. I originally thought about adding two new
broker dynamic configs to allow administrators to determine the
acceptable minimums:

min.topic.segment.ms
min.topic.segment.bytes

I put this in the "rejected alternatives" section because I think
adding two more configs to the broker would be too much trouble. Or is
it not?

Your use case is valid, and adding these dynamic configs would still
allow it to work, with the additional step of temporarily lowering the
dynamic configs.
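
For illustration, a rough sketch of what "temporarily lowering the dynamic
config" could look like with the Java Admin client. Note that
min.topic.segment.ms is only the config name proposed here, not an existing
broker config:

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class LowerMinimumTemporarily {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Cluster-wide dynamic broker config ("" addresses the cluster default).
            ConfigResource cluster = new ConfigResource(ConfigResource.Type.BROKER, "");
            // Hypothetical proposed config: lower the minimum so that a one-off
            // topic override (e.g. a very small segment.ms) is accepted.
            admin.incrementalAlterConfigs(Map.of(cluster, List.of(
                new AlterConfigOp(new ConfigEntry("min.topic.segment.ms", "1000"),
                                  AlterConfigOp.OpType.SET)))).all().get();
            // ... apply the topic-level change here, then remove the override
            // so the default minimum applies again.
            admin.incrementalAlterConfigs(Map.of(cluster, List.of(
                new AlterConfigOp(new ConfigEntry("min.topic.segment.ms", ""),
                                  AlterConfigOp.OpType.DELETE)))).all().get();
        }
    }
}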

What do you think?

Regards
Badai


On Wed, Jul 7, 2021 at 2:06 PM James Cheng  wrote:
>
> Badai,
>
> Thanks for the KIP.
>
> We sometimes want to force compaction on a topic. This might be because there 
> is a bad record in the topic, and we want to force it to get deleted. The way 
> we do this is, we set segment.ms to a small value and write a record, in 
> order to force a segment roll. And we also set min.cleanable.dirty.ratio=0, 
> in order to trigger compaction. It's rare that we need to do it, but it 
> happens sometimes. This change would make it more difficult to do that. With 
> this KIP, we would have to write up to 1MB of data before causing the segment 
> roll, or wait an hour.
>
> Although come to think of it, if my goal is to trigger compaction, then I can 
> just write my tombstone a couple thousand times. So maybe this KIP just makes 
> it slightly more tedious, but doesn't make it impossible.
>
> Another use case is when we want to truncate a topic, so we set a small 
> segment size and set retention to almost zero, which will allow Kafka to 
> delete what is in the topic. For that, though, we could also use 
> kafka-delete-records.sh, so this KIP would not have impact on that particular 
> use case.
>
> -James
>
> > On Jul 6, 2021, at 2:23 PM, Badai Aqrandista  
> > wrote:
> >
> > Hi all
> >
> > I have just created KIP-760
> > (https://cwiki.apache.org/confluence/display/KAFKA/KIP-760%3A+Increase+minimum+value+of+segment.ms+and+segment.bytes).
> >
> > I created this KIP because I have seen so many Kafka brokers crash due
> > to small segment.ms and/or segment.bytes.
> >
> > Please let me know what you think.
> >
> > --
> > Thanks,
> > Badai
>


-- 
Thanks,
Badai


Re: [DISCUSS] KIP-729 Custom validation of records on the broker prior to log append

2021-07-06 Thread Soumyajit Sahu
Interesting point. You are correct that at least KIP-729 cannot validate
that.

We could propose a different KIP for that, which could enforce it at a
higher layer. Personally, I would be hesitant to discard the data in that
case; I would rather use metrics/logs to detect it and inform the producers
about it.
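
For readers following the thread, here is a minimal sketch of what an
implementation of the BrokerRecordValidator interface proposed in KIP-729
could look like. The interface itself is only a proposal (its "modified
signature" is quoted later in this thread), and the topic prefix and header
key below are made up for the example:

import java.nio.ByteBuffer;
import java.util.Optional;
import org.apache.kafka.common.InvalidRecordException;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.header.Header;

// Sketch only: BrokerRecordValidator is the interface proposed in KIP-729,
// not an existing Kafka API.
interface BrokerRecordValidator {
    Optional<InvalidRecordException> validateRecord(TopicPartition topicPartition,
            ByteBuffer key, ByteBuffer value, Header[] headers);
}

class SchemaHeaderValidator implements BrokerRecordValidator {
    @Override
    public Optional<InvalidRecordException> validateRecord(TopicPartition topicPartition,
            ByteBuffer key, ByteBuffer value, Header[] headers) {
        // Example policy: records on "orders-" topics must carry a "schema-id" header.
        if (topicPartition.topic().startsWith("orders-")) {
            for (Header header : headers) {
                if ("schema-id".equals(header.key())) {
                    return Optional.empty();
                }
            }
            return Optional.of(new InvalidRecordException(
                "Record for " + topicPartition + " is missing the schema-id header"));
        }
        return Optional.empty();
    }
}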


On Tue, Jul 6, 2021, 9:13 PM James Cheng  wrote:

> One use case we would like is to require that producers are sending
> compressed messages. Would this KIP (or KIP-686) allow the broker to detect
> that? From looking at both KIPs, it doesn't look it would help with my
> particular use case. Both of the KIPs are at the Record-level.
>
> Thanks,
> -James
>
> > On Jun 30, 2021, at 10:05 AM, Soumyajit Sahu 
> wrote:
> >
> > Hi Nikolay,
> > Great to hear that. I'm ok with either one too.
> > I had missed noticing the KIP-686. Thanks for bringing it up.
> >
> > I have tried to keep this one simple, but hope it can cover all our
> > enterprise needs.
> >
> > Should we put this one for vote?
> >
> > Regards,
> > Soumyajit
> >
> >
> > On Wed, Jun 30, 2021, 8:50 AM Nikolay Izhikov 
> wrote:
> >
> >> Team, if we have support from committers for an API to check records on the
> >> broker side, let's choose one KIP to go with and move forward to vote and
> >> implementation.
> >>
> >> I'm ready to drive the implementation of this API.
> >> It seems very useful to me.
> >>
> >>> On 30 June 2021, at 18:04, Nikolay Izhikov wrote:
> >>>
> >>> Hello.
> >>>
> >>> I had a very similar proposal [1].
> >>> So, yes, I think we should have one implementation of API in the
> product.
> >>>
> >>> [1]
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-686%3A+API+to+ensure+Records+policy+on+the+broker
> >>>
>  On 30 June 2021, at 17:57, Christopher Shannon <christopher.l.shan...@gmail.com> wrote:
> 
>  I would find this feature very useful as well as adding custom
> >> validation
>  to incoming records would be nice to prevent bad data from making it
> to
> >> the
>  topic.
> 
>  On Wed, Apr 7, 2021 at 7:03 PM Soumyajit Sahu <
> soumyajit.s...@gmail.com
> >>>
>  wrote:
> 
> > Thanks Colin! Good call on the ApiRecordError. We could use
> > InvalidRecordException instead, and have the broker convert it
> > to ApiRecordError.
> > Modified signature below.
> >
> > interface BrokerRecordValidator {
> > /**
> >  * Validate the record for a given topic-partition.
> >  */
> >  Optional<InvalidRecordException> validateRecord(TopicPartition
> > topicPartition, ByteBuffer key, ByteBuffer value, Header[] headers);
> > }
> >
> > On Tue, Apr 6, 2021 at 5:09 PM Colin McCabe 
> >> wrote:
> >
> >> Hi Soumyajit,
> >>
> >> The difficult thing is deciding which fields to share and how to
> share
> >> them.  Key and value are probably the minimum we need to make this
> > useful.
> >> If we do choose to go with byte buffer, it is not necessary to also
> >> pass
> >> the size, since ByteBuffer maintains that internally.
> >>
> >> ApiRecordError is also an internal class, so it can't be used in a
> >> public
> >> API.  I think most likely if we were going to do this, we would just
> > catch
> >> an exception and use the exception text as the validation error.
> >>
> >> best,
> >> Colin
> >>
> >>
> >> On Tue, Apr 6, 2021, at 15:57, Soumyajit Sahu wrote:
> >>> Hi Tom,
> >>>
> >>> Makes sense. Thanks for the explanation. I get what Colin had meant
> >> earlier.
> >>>
> >>> Would a different signature for the interface work? Example below,
> >> but
> >>> please feel free to suggest alternatives if there are any
> >> possibilities
> >> of
> >>> such.
> >>>
> >>> If needed, then deprecating this and introducing a new signature
> >> would
> > be
> >>> straight-forward as both (old and new) calls could be made serially
> >> in
> >> the
> >>> LogValidator allowing a coexistence for a transition period.
> >>>
> >>> interface BrokerRecordValidator {
> >>>  /**
> >>>   * Validate the record for a given topic-partition.
> >>>   */
> >>>  Optional<ApiRecordError> validateRecord(TopicPartition
> >> topicPartition,
> >>> int keySize, ByteBuffer key, int valueSize, ByteBuffer value,
> >> Header[]
> >>> headers);
> >>> }
> >>>
> >>>
> >>> On Tue, Apr 6, 2021 at 12:54 AM Tom Bentley 
> > wrote:
> >>>
>  Hi Soumyajit,
> 
>  Although that class does indeed have public access at the Java
> >> level,
> >> it
>  does so only because it needs to be used by internal Kafka code
> >> which
> >> lives
>  in other packages (there isn't any more restrictive access
> modifier
> >> which
>  would work). What the project considers public Java API is
> >> determined
> >> by
>  what's included 

Re: [DISCUSS] KIP-729 Custom validation of records on the broker prior to log append

2021-07-06 Thread James Cheng
One use case we would like is to require that producers are sending compressed 
messages. Would this KIP (or KIP-686) allow the broker to detect that? From 
looking at both KIPs, it doesn't look it would help with my particular use 
case. Both of the KIPs are at the Record-level.

Thanks,
-James

> On Jun 30, 2021, at 10:05 AM, Soumyajit Sahu  wrote:
> 
> Hi Nikolay,
> Great to hear that. I'm ok with either one too.
> I had missed noticing the KIP-686. Thanks for bringing it up.
> 
> I have tried to keep this one simple, but hope it can cover all our
> enterprise needs.
> 
> Should we put this one for vote?
> 
> Regards,
> Soumyajit
> 
> 
> On Wed, Jun 30, 2021, 8:50 AM Nikolay Izhikov  wrote:
> 
>> Team, if we have support from committers for an API to check records on the
>> broker side, let's choose one KIP to go with and move forward to vote and
>> implementation.
>> 
>> I'm ready to drive the implementation of this API.
>> It seems very useful to me.
>> 
>>> On 30 June 2021, at 18:04, Nikolay Izhikov wrote:
>>> 
>>> Hello.
>>> 
>>> I had a very similar proposal [1].
>>> So, yes, I think we should have one implementation of API in the product.
>>> 
>>> [1]
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-686%3A+API+to+ensure+Records+policy+on+the+broker
>>> 
 On 30 June 2021, at 17:57, Christopher Shannon <christopher.l.shan...@gmail.com> wrote:
 
 I would find this feature very useful as well as adding custom
>> validation
 to incoming records would be nice to prevent bad data from making it to
>> the
 topic.
 
 On Wed, Apr 7, 2021 at 7:03 PM Soumyajit Sahu >> 
 wrote:
 
> Thanks Colin! Good call on the ApiRecordError. We could use
> InvalidRecordException instead, and have the broker convert it
> to ApiRecordError.
> Modified signature below.
> 
> interface BrokerRecordValidator {
> /**
>  * Validate the record for a given topic-partition.
>  */
>  Optional<InvalidRecordException> validateRecord(TopicPartition
> topicPartition, ByteBuffer key, ByteBuffer value, Header[] headers);
> }
> 
> On Tue, Apr 6, 2021 at 5:09 PM Colin McCabe 
>> wrote:
> 
>> Hi Soumyajit,
>> 
>> The difficult thing is deciding which fields to share and how to share
>> them.  Key and value are probably the minimum we need to make this
> useful.
>> If we do choose to go with byte buffer, it is not necessary to also
>> pass
>> the size, since ByteBuffer maintains that internally.
>> 
>> ApiRecordError is also an internal class, so it can't be used in a
>> public
>> API.  I think most likely if we were going to do this, we would just
> catch
>> an exception and use the exception text as the validation error.
>> 
>> best,
>> Colin
>> 
>> 
>> On Tue, Apr 6, 2021, at 15:57, Soumyajit Sahu wrote:
>>> Hi Tom,
>>> 
>>> Makes sense. Thanks for the explanation. I get what Colin had meant
>> earlier.
>>> 
>>> Would a different signature for the interface work? Example below,
>> but
>>> please feel free to suggest alternatives if there are any
>> possibilities
>> of
>>> such.
>>> 
>>> If needed, then deprecating this and introducing a new signature
>> would
> be
>>> straight-forward as both (old and new) calls could be made serially
>> in
>> the
>>> LogValidator allowing a coexistence for a transition period.
>>> 
>>> interface BrokerRecordValidator {
>>>  /**
>>>   * Validate the record for a given topic-partition.
>>>   */
>>>  Optional<ApiRecordError> validateRecord(TopicPartition
>> topicPartition,
>>> int keySize, ByteBuffer key, int valueSize, ByteBuffer value,
>> Header[]
>>> headers);
>>> }
>>> 
>>> 
>>> On Tue, Apr 6, 2021 at 12:54 AM Tom Bentley 
> wrote:
>>> 
 Hi Soumyajit,
 
 Although that class does indeed have public access at the Java
>> level,
>> it
 does so only because it needs to be used by internal Kafka code
>> which
>> lives
 in other packages (there isn't any more restrictive access modifier
>> which
 would work). What the project considers public Java API is
>> determined
>> by
 what's included in the published Javadocs:
 https://kafka.apache.org/27/javadoc/index.html, which doesn't
> include
>> the
 org.apache.kafka.common.record package.
 
 One of the problems with making these internal classes public is it
>> ties
 the project into supporting them as APIs, which can make changing
> them
>> much
 harder and in the long run that can slow, or even prevent,
>> innovation
>> in
 the rest of Kafka.
 
 Kind regards,
 
 Tom
 
 
 
 On Sun, Apr 4, 2021 at 7:31 PM Soumyajit Sahu <
>> soumyajit.s..

Re: [DISCUSS] KIP-760: Increase minimum value of segment.ms and segment.bytes

2021-07-06 Thread James Cheng
Badai,

Thanks for the KIP.

We sometimes want to force compaction on a topic. This might be because there 
is a bad record in the topic, and we want to force it to get deleted. The way 
we do this is, we set segment.ms to a small value and write a record, in order 
to force a segment roll. And we also set min.cleanable.dirty.ratio=0, in order 
to trigger compaction. It's rare that we need to do it, but it happens 
sometimes. This change would make it more difficult to do that. With this KIP, 
we would have to write up to 1MB of data before causing the segment roll, or 
wait an hour. 
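
For reference, a rough sketch of those topic-level overrides using the Java 
Admin client (the topic name and values are illustrative, and the overrides 
should be removed again once compaction has run):

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class ForceCompaction {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            ConfigResource topic =
                new ConfigResource(ConfigResource.Type.TOPIC, "my-compacted-topic");
            // Small segment.ms so the next write rolls the active segment, and
            // min.cleanable.dirty.ratio=0 so the log cleaner picks the log up.
            admin.incrementalAlterConfigs(Map.of(topic, List.of(
                new AlterConfigOp(new ConfigEntry("segment.ms", "100"),
                                  AlterConfigOp.OpType.SET),
                new AlterConfigOp(new ConfigEntry("min.cleanable.dirty.ratio", "0"),
                                  AlterConfigOp.OpType.SET)))).all().get();
            // Then produce one record to force the segment roll and wait for
            // the log cleaner to run.
        }
    }
}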

Although come to think of it, if my goal is to trigger compaction, then I can 
just write my tombstone a couple thousand times. So maybe this KIP just makes 
it slightly more tedious, but doesn't make it impossible.

Another use case is when we want to truncate a topic, so we set a small segment 
size and set retention to almost zero, which will allow Kafka to delete what is 
in the topic. For that, though, we could also use kafka-delete-records.sh, so 
this KIP would not have impact on that particular use case.
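
A similarly hedged sketch of the kafka-delete-records.sh equivalent via the 
Java Admin client (topic, partition and offset are placeholders):

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class TruncateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Delete everything before offset 42 in partition 0 of "my-topic".
            admin.deleteRecords(Map.of(
                new TopicPartition("my-topic", 0),
                RecordsToDelete.beforeOffset(42L))).all().get();
        }
    }
}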

-James

> On Jul 6, 2021, at 2:23 PM, Badai Aqrandista  
> wrote:
> 
> Hi all
> 
> I have just created KIP-760
> (https://cwiki.apache.org/confluence/display/KAFKA/KIP-760%3A+Increase+minimum+value+of+segment.ms+and+segment.bytes).
> 
> I created this KIP because I have seen so many Kafka brokers crash due
> to small segment.ms and/or segment.bytes.
> 
> Please let me know what you think.
> 
> -- 
> Thanks,
> Badai



Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #291

2021-07-06 Thread Apache Jenkins Server
See 




Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #290

2021-07-06 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-13041) Support debugging system tests

2021-07-06 Thread Stanislav Vodetskyi (Jira)
Stanislav Vodetskyi created KAFKA-13041:
---

 Summary: Support debugging system tests
 Key: KAFKA-13041
 URL: https://issues.apache.org/jira/browse/KAFKA-13041
 Project: Kafka
  Issue Type: Improvement
  Components: system tests
Reporter: Stanislav Vodetskyi






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #289

2021-07-06 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 348601 lines...]
[2021-07-06T22:04:54.385Z] 
[2021-07-06T22:04:54.385Z] PlaintextConsumerTest > testCommitSpecifiedOffsets() 
STARTED
[2021-07-06T22:04:55.420Z] 
[2021-07-06T22:04:55.420Z] PlaintextConsumerTest > testPositionAndCommit() 
PASSED
[2021-07-06T22:04:55.420Z] 
[2021-07-06T22:04:55.420Z] PlaintextConsumerTest > 
testFetchRecordLargerThanMaxPartitionFetchBytes() STARTED
[2021-07-06T22:04:58.922Z] 
[2021-07-06T22:04:58.922Z] PlaintextConsumerTest > testCommitSpecifiedOffsets() 
PASSED
[2021-07-06T22:04:58.922Z] 
[2021-07-06T22:04:58.922Z] PlaintextConsumerTest > 
testPerPartitionLeadMetricsCleanUpWithSubscribe() STARTED
[2021-07-06T22:05:00.057Z] 
[2021-07-06T22:05:00.058Z] PlaintextConsumerTest > 
testFetchRecordLargerThanMaxPartitionFetchBytes() PASSED
[2021-07-06T22:05:00.058Z] 
[2021-07-06T22:05:00.058Z] PlaintextConsumerTest > testUnsubscribeTopic() 
STARTED
[2021-07-06T22:05:04.749Z] 
[2021-07-06T22:05:04.749Z] PlaintextConsumerTest > 
testPerPartitionLeadMetricsCleanUpWithSubscribe() PASSED
[2021-07-06T22:05:04.749Z] 
[2021-07-06T22:05:04.749Z] PlaintextConsumerTest > testCommitMetadata() STARTED
[2021-07-06T22:05:06.524Z] 
[2021-07-06T22:05:06.524Z] PlaintextConsumerTest > testUnsubscribeTopic() PASSED
[2021-07-06T22:05:06.524Z] 
[2021-07-06T22:05:06.524Z] PlaintextConsumerTest > 
testMultiConsumerSessionTimeoutOnClose() STARTED
[2021-07-06T22:05:09.452Z] 
[2021-07-06T22:05:09.452Z] PlaintextConsumerTest > testCommitMetadata() PASSED
[2021-07-06T22:05:09.452Z] 
[2021-07-06T22:05:09.452Z] PlaintextConsumerTest > testRoundRobinAssignment() 
STARTED
[2021-07-06T22:05:16.425Z] 
[2021-07-06T22:05:16.425Z] PlaintextConsumerTest > testRoundRobinAssignment() 
PASSED
[2021-07-06T22:05:16.425Z] 
[2021-07-06T22:05:16.425Z] PlaintextConsumerTest > testPatternSubscription() 
STARTED
[2021-07-06T22:05:22.986Z] 
[2021-07-06T22:05:22.986Z] PlaintextConsumerTest > 
testMultiConsumerSessionTimeoutOnClose() PASSED
[2021-07-06T22:05:22.986Z] 
[2021-07-06T22:05:22.986Z] PlaintextConsumerTest > 
testFetchRecordLargerThanFetchMaxBytes() STARTED
[2021-07-06T22:05:28.365Z] 
[2021-07-06T22:05:28.365Z] PlaintextConsumerTest > testPatternSubscription() 
PASSED
[2021-07-06T22:05:29.262Z] 
[2021-07-06T22:05:29.262Z] PlaintextConsumerTest > 
testFetchRecordLargerThanFetchMaxBytes() PASSED
[2021-07-06T22:05:29.262Z] 
[2021-07-06T22:05:29.262Z] PlaintextConsumerTest > 
testMultiConsumerDefaultAssignment() STARTED
[2021-07-06T22:05:29.296Z] 
[2021-07-06T22:05:29.296Z] Deprecated Gradle features were used in this build, 
making it incompatible with Gradle 8.0.
[2021-07-06T22:05:29.296Z] 
[2021-07-06T22:05:29.296Z] You can use '--warning-mode all' to show the 
individual deprecation warnings and determine if they come from your own 
scripts or plugins.
[2021-07-06T22:05:29.296Z] 
[2021-07-06T22:05:29.296Z] See 
https://docs.gradle.org/7.1.1/userguide/command_line_interface.html#sec:command_line_warnings
[2021-07-06T22:05:29.296Z] 
[2021-07-06T22:05:29.296Z] BUILD SUCCESSFUL in 2h 10m 6s
[2021-07-06T22:05:29.296Z] 199 actionable tasks: 107 executed, 92 up-to-date
[2021-07-06T22:05:29.296Z] 
[2021-07-06T22:05:29.296Z] See the profiling report at: 
file:///home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/build/reports/profile/profile-2021-07-06-19-55-27.html
[2021-07-06T22:05:29.296Z] A fine-grained performance profile is available: use 
the --scan option.
[Pipeline] junit
[2021-07-06T22:05:30.157Z] Recording test results
[2021-07-06T22:05:47.938Z] [Checks API] No suitable checks publisher found.
[Pipeline] echo
[2021-07-06T22:05:47.939Z] Skipping Kafka Streams archetype test for Java 16
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[2021-07-06T22:05:48.148Z] 
[2021-07-06T22:05:48.148Z] PlaintextConsumerTest > 
testMultiConsumerDefaultAssignment() PASSED
[2021-07-06T22:05:48.148Z] 
[2021-07-06T22:05:48.148Z] PlaintextConsumerTest > testAutoCommitOnClose() 
STARTED
[2021-07-06T22:05:53.623Z] 
[2021-07-06T22:05:53.623Z] PlaintextConsumerTest > testAutoCommitOnClose() 
PASSED
[2021-07-06T22:05:53.623Z] 
[2021-07-06T22:05:53.623Z] PlaintextConsumerTest > testListTopics() STARTED
[2021-07-06T22:05:59.439Z] 
[2021-07-06T22:05:59.439Z] PlaintextConsumerTest > testListTopics() PASSED
[2021-07-06T22:05:59.439Z] 
[2021-07-06T22:05:59.439Z] PlaintextConsumerTest > 
testExpandingTopicSubscriptions() STARTED
[2021-07-06T22:06:05.087Z] 
[2021-07-06T22:06:05.087Z] PlaintextConsumerTest > 
testExpandingTopicSubscriptions() PASSED
[2021-07-06T22:06:05.087Z] 
[2021-07-06T22:06:05.087Z] PlaintextConsumerTest > testInterceptors() STARTED
[2021-07-06T22:06:13.478Z] 
[2021-07-06T22:06

[DISCUSS] KIP-760: Increase minimum value of segment.ms and segment.bytes

2021-07-06 Thread Badai Aqrandista
Hi all

I have just created KIP-760
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-760%3A+Increase+minimum+value+of+segment.ms+and+segment.bytes).

I created this KIP because I have seen so many Kafka brokers crash due
to small segment.ms and/or segment.bytes.

Please let me know what you think.

-- 
Thanks,
Badai


[jira] [Resolved] (KAFKA-7760) Add broker configuration to set minimum value for segment.bytes and segment.ms

2021-07-06 Thread Badai Aqrandista (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Badai Aqrandista resolved KAFKA-7760.
-
Resolution: Duplicate

> Add broker configuration to set minimum value for segment.bytes and segment.ms
> --
>
> Key: KAFKA-7760
> URL: https://issues.apache.org/jira/browse/KAFKA-7760
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Badai Aqrandista
>Assignee: Dulvin Witharane
>Priority: Major
>  Labels: kip, newbie
>
> If someone sets segment.bytes or segment.ms at the topic level to a very small 
> value (e.g. segment.bytes=1000 or segment.ms=1000), Kafka will generate a 
> very high number of segment files. This can bring down the whole broker by 
> hitting the maximum number of open files (for logs) or the maximum number of 
> mmap-ed files (for indexes).
> To prevent that from happening, I would like to suggest adding two new items 
> to the broker configuration:
>  * min.topic.segment.bytes, defaults to 1048576: The minimum value for 
> segment.bytes. When someone sets topic configuration segment.bytes to a value 
> lower than this, Kafka throws an error INVALID VALUE.
>  * min.topic.segment.ms, defaults to 360: The minimum value for 
> segment.ms. When someone sets topic configuration segment.ms to a value lower 
> than this, Kafka throws an error INVALID VALUE.
> Thanks
> Badai
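
For illustration only, a sketch of the kind of check this ticket proposes; the 
config name and behavior are the proposal, not existing Kafka code:

import org.apache.kafka.common.config.ConfigException;

public class SegmentConfigPolicy {
    // Proposed behavior (sketch): reject topic-level segment.ms values below
    // the broker-level minimum.
    static void validateSegmentMs(long requestedSegmentMs, long minTopicSegmentMs) {
        if (requestedSegmentMs < minTopicSegmentMs) {
            throw new ConfigException("segment.ms", requestedSegmentMs,
                "Invalid value: must be at least " + minTopicSegmentMs
                + " (min.topic.segment.ms)");
        }
    }
}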



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13040) Increase minimum value of segment.ms and segment.bytes

2021-07-06 Thread Badai Aqrandista (Jira)
Badai Aqrandista created KAFKA-13040:


 Summary: Increase minimum value of segment.ms and segment.bytes
 Key: KAFKA-13040
 URL: https://issues.apache.org/jira/browse/KAFKA-13040
 Project: Kafka
  Issue Type: Improvement
Reporter: Badai Aqrandista


Many times, Kafka brokers in production crash with "Too many open files" or 
"Out of memory" errors because some Kafka topics have a lot of segment files 
as a result of small {{segment.ms}} or {{segment.bytes}}. These two 
configurations can be set by any user who is authorized to create topics or 
modify topic configurations.

To prevent these two configurations from crashing Kafka brokers, they should 
have a sufficiently large minimum value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: New release branch 3.0

2021-07-06 Thread Israel Ekpo
Thanks for the heads up, Konstantine.

I am currently working on these issues below and should send new PRs by the
end of the week.

My changes should be merged into the 3.0 branch as well as trunk. I have
marked them as "blockers" for tracking purposes and for the PR merges.

*Add Missing Class-Level Javadoc to Descendants of
org.apache.kafka.common.errors.ApiException*
https://issues.apache.org/jira/browse/KAFKA-12644

*Migrate all Tests to New API and Remove Suppression for Deprecation
Warnings related to KIP-633*
https://issues.apache.org/jira/browse/KAFKA-12994

*Improve Javadocs for API Changes from KIP-633*
https://issues.apache.org/jira/browse/KAFKA-13021

They are all improvements and the release can proceed without them, but I
wanted to make the workflow smoother.

Please let me know if you have any questions or concerns.

Thanks.



On Tue, Jul 6, 2021 at 3:38 PM Konstantine Karantasis <
kkaranta...@apache.org> wrote:

> Hi Kafka developers and friends,
>
> The release branch for Apache Kafka 3.0 (with version 3.0.0) has been
> created
> (https://github.com/apache/kafka/tree/3.0).
>
> The trunk branch is about to be bumped to 3.1.0-SNAPSHOT via
> https://github.com/apache/kafka/pull/10981
>
> At this point, I'll be reviewing the open JIRA tickets to move every
> non-blocker from this release to the next one.
>
> Going forward, most changes should land only on the trunk branch.
>
> Blockers (existing and new that we discover while testing the release) will
> be double-committed.
> Please discuss with your reviewers whether your PR should go to trunk or to
> trunk as well as 3.0 so they can merge accordingly.
>
> The 3.0 branch going live is an excellent opportunity to focus on testing
> the features you aim to include in the upcoming release and making sure
> that tests are stable, dependable and offer good coverage.
>
> Please help us test the release and make sure it gets out the door stable
> and with high quality.
>
> Sincerely,
> Konstantine
>


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #288

2021-07-06 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 410741 lines...]
[2021-07-06T19:37:46.011Z] 
[2021-07-06T19:37:46.011Z] AuthorizerIntegrationTest > 
testSendOffsetsWithNoConsumerGroupDescribeAccess() PASSED
[2021-07-06T19:37:46.011Z] 
[2021-07-06T19:37:46.011Z] AuthorizerIntegrationTest > 
testListTransactionsAuthorization() STARTED
[2021-07-06T19:37:49.870Z] 
[2021-07-06T19:37:49.870Z] AuthorizerIntegrationTest > 
testListTransactionsAuthorization() PASSED
[2021-07-06T19:37:49.870Z] 
[2021-07-06T19:37:49.870Z] AuthorizerIntegrationTest > 
testOffsetFetchTopicDescribe() STARTED
[2021-07-06T19:37:56.270Z] 
[2021-07-06T19:37:56.270Z] AuthorizerIntegrationTest > 
testOffsetFetchTopicDescribe() PASSED
[2021-07-06T19:37:56.270Z] 
[2021-07-06T19:37:56.270Z] AuthorizerIntegrationTest > 
testCommitWithTopicAndGroupRead() STARTED
[2021-07-06T19:37:59.223Z] 
[2021-07-06T19:37:59.223Z] AuthorizerIntegrationTest > 
testCommitWithTopicAndGroupRead() PASSED
[2021-07-06T19:37:59.223Z] 
[2021-07-06T19:37:59.223Z] AuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId() STARTED
[2021-07-06T19:38:03.541Z] 
[2021-07-06T19:38:03.541Z] AuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId() PASSED
[2021-07-06T19:38:03.541Z] 
[2021-07-06T19:38:03.541Z] AuthorizerIntegrationTest > 
testSimpleConsumeWithExplicitSeekAndNoGroupAccess() STARTED
[2021-07-06T19:38:06.662Z] 
[2021-07-06T19:38:06.662Z] AuthorizerIntegrationTest > 
testSimpleConsumeWithExplicitSeekAndNoGroupAccess() PASSED
[2021-07-06T19:38:06.662Z] 
[2021-07-06T19:38:06.662Z] SslProducerSendTest > 
testSendNonCompressedMessageWithCreateTime() STARTED
[2021-07-06T19:38:14.056Z] 
[2021-07-06T19:38:14.056Z] SslProducerSendTest > 
testSendNonCompressedMessageWithCreateTime() PASSED
[2021-07-06T19:38:14.056Z] 
[2021-07-06T19:38:14.056Z] SslProducerSendTest > testClose() STARTED
[2021-07-06T19:38:24.108Z] 
[2021-07-06T19:38:24.108Z] SslProducerSendTest > testClose() PASSED
[2021-07-06T19:38:24.108Z] 
[2021-07-06T19:38:24.108Z] SslProducerSendTest > testFlush() STARTED
[2021-07-06T19:38:30.858Z] 
[2021-07-06T19:38:30.858Z] SslProducerSendTest > testFlush() PASSED
[2021-07-06T19:38:30.858Z] 
[2021-07-06T19:38:30.858Z] SslProducerSendTest > testSendToPartition() STARTED
[2021-07-06T19:38:38.321Z] 
[2021-07-06T19:38:38.321Z] SslProducerSendTest > testSendToPartition() PASSED
[2021-07-06T19:38:38.321Z] 
[2021-07-06T19:38:38.321Z] SslProducerSendTest > testSendOffset() STARTED
[2021-07-06T19:38:47.324Z] 
[2021-07-06T19:38:47.324Z] SslProducerSendTest > testSendOffset() PASSED
[2021-07-06T19:38:47.324Z] 
[2021-07-06T19:38:47.324Z] SslProducerSendTest > 
testSendCompressedMessageWithCreateTime() STARTED
[2021-07-06T19:38:53.615Z] 
[2021-07-06T19:38:53.615Z] SslProducerSendTest > 
testSendCompressedMessageWithCreateTime() PASSED
[2021-07-06T19:38:53.615Z] 
[2021-07-06T19:38:53.615Z] SslProducerSendTest > 
testCloseWithZeroTimeoutFromCallerThread() STARTED
[2021-07-06T19:39:15.466Z] 
[2021-07-06T19:39:15.466Z] SslProducerSendTest > 
testCloseWithZeroTimeoutFromCallerThread() PASSED
[2021-07-06T19:39:15.466Z] 
[2021-07-06T19:39:15.466Z] SslProducerSendTest > 
testCloseWithZeroTimeoutFromSenderThread() STARTED
[2021-07-06T19:39:43.411Z] 
[2021-07-06T19:39:43.411Z] SslProducerSendTest > 
testCloseWithZeroTimeoutFromSenderThread() PASSED
[2021-07-06T19:39:43.411Z] 
[2021-07-06T19:39:43.411Z] SslProducerSendTest > 
testSendBeforeAndAfterPartitionExpansion() STARTED
[2021-07-06T19:39:55.294Z] 
[2021-07-06T19:39:55.294Z] SslProducerSendTest > 
testSendBeforeAndAfterPartitionExpansion() PASSED
[2021-07-06T19:39:55.294Z] 
[2021-07-06T19:39:55.294Z] ProducerCompressionTest > testCompression(String) > 
kafka.api.test.ProducerCompressionTest.testCompression(String)[1] STARTED
[2021-07-06T19:40:04.887Z] 
[2021-07-06T19:40:04.887Z] ProducerCompressionTest > testCompression(String) > 
kafka.api.test.ProducerCompressionTest.testCompression(String)[1] PASSED
[2021-07-06T19:40:04.887Z] 
[2021-07-06T19:40:04.887Z] ProducerCompressionTest > testCompression(String) > 
kafka.api.test.ProducerCompressionTest.testCompression(String)[2] STARTED
[2021-07-06T19:40:11.528Z] 
[2021-07-06T19:40:11.528Z] ProducerCompressionTest > testCompression(String) > 
kafka.api.test.ProducerCompressionTest.testCompression(String)[2] PASSED
[2021-07-06T19:40:11.528Z] 
[2021-07-06T19:40:11.528Z] ProducerCompressionTest > testCompression(String) > 
kafka.api.test.ProducerCompressionTest.testCompression(String)[3] STARTED
[2021-07-06T19:40:18.729Z] 
[2021-07-06T19:40:18.729Z] ProducerCompressionTest > testCompression(String) > 
kafka.api.test.ProducerCompressionTest.testCompression(String)[3] PASSED
[2021-07-06T19:40:18.729Z] 
[2021-07-06T19:40:18.729Z] ProducerCompressionTest > testCompression(String) > 
kafka.api.test.ProducerCompressionTest.testCompression(

New release branch 3.0

2021-07-06 Thread Konstantine Karantasis
Hi Kafka developers and friends,

The release branch for Apache Kafka 3.0 (with version 3.0.0) has been
created
(https://github.com/apache/kafka/tree/3.0).

The trunk branch is about to be bumped to 3.1.0-SNAPSHOT via
https://github.com/apache/kafka/pull/10981

At this point, I'll be reviewing the open JIRA tickets to move every
non-blocker from this release to the next one.

Going forward, most changes should land only on the trunk branch.

Blockers (existing and new that we discover while testing the release) will
be double-committed.
Please discuss with your reviewers whether your PR should go to trunk or to
trunk as well as 3.0 so they can merge accordingly.

The 3.0 branch going live is an excellent opportunity to focus on testing
the features you aim to include in the upcoming release and making sure
that tests are stable, dependable and offer good coverage.

Please help us test the release and make sure it gets out the door stable
and with high quality.

Sincerely,
Konstantine


[jira] [Resolved] (KAFKA-13035) Kafka Connect: Update documentation for POST /connectors/(string: name)/restart to include task Restart behavior

2021-07-06 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-13035.
---
Fix Version/s: 3.0.0
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to `trunk` in time for 3.0.0

> Kafka Connect: Update documentation for POST /connectors/(string: 
> name)/restart to include task Restart behavior  
> --
>
> Key: KAFKA-13035
> URL: https://issues.apache.org/jira/browse/KAFKA-13035
> Project: Kafka
>  Issue Type: Task
>  Components: KafkaConnect
>Reporter: Kalpesh Patel
>Assignee: Kalpesh Patel
>Priority: Minor
> Fix For: 3.0.0
>
>
> KAFKA-4793 updated the behavior of POST /connectors/(string: name)/restart 
> based on the query parameters onlyFailed and includeTasks, per 
> [KIP-745|https://cwiki.apache.org/confluence/display/KAFKA/KIP-745%3A+Connect+API+to+restart+connector+and+tasks].
>  We should update the documentation to reflect this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12979) Implement --find-hanging API in transaction tool

2021-07-06 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-12979.
-
Fix Version/s: 3.0.0
   Resolution: Fixed

> Implement --find-hanging API in transaction tool
> 
>
> Key: KAFKA-12979
> URL: https://issues.apache.org/jira/browse/KAFKA-12979
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 3.0.0
>
>
> Implements the --find-hanging argument described here: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-664%3A+Provide+tooling+to+detect+and+abort+hanging+transactions#KIP664:Providetoolingtodetectandaborthangingtransactions-FindingHangingTransactions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12981) Ensure LogSegment.maxTimestampSoFar and LogSegment.offsetOfMaxTimestampSoFar are read/updated in sync

2021-07-06 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-12981.
-
Fix Version/s: 3.0.0
 Reviewer: David Jacot
   Resolution: Fixed

> Ensure LogSegment.maxTimestampSoFar and LogSegment.offsetOfMaxTimestampSoFar 
> are read/updated in sync
> -
>
> Key: KAFKA-12981
> URL: https://issues.apache.org/jira/browse/KAFKA-12981
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Tom Scott
>Assignee: Tom Scott
>Priority: Minor
> Fix For: 3.0.0
>
>
> KIP-734 extends listOffsetRequest to fetch offsets by max timestamp as well 
> as by start and end offset. This relies on LogSegment.maxTimestampSoFar and 
> LogSegment.offsetOfMaxTimestampSoFar, but there is currently no 
> synchronisation between the two, meaning that one could be updated whilst the 
> other is being read.
> This ticket ensures that LogSegment.maxTimestampSoFar and 
> LogSegment.offsetOfMaxTimestampSoFar are locked on read to ensure 
> synchronisation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13039) kafka 0.10.1 gradle build failed

2021-07-06 Thread hantaoluo (Jira)
hantaoluo created KAFKA-13039:
-

 Summary: kafka 0.10.1 gradle build failed
 Key: KAFKA-13039
 URL: https://issues.apache.org/jira/browse/KAFKA-13039
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.10.1.0
 Environment: windows10 gradle3.5 or 3.7
Reporter: hantaoluo


When I use Gradle 3.0 or 3.5 to build Kafka, it fails:
 

* What went wrong:
A problem occurred evaluating root project 'kafka-0.10.1.0-src'.
> Could not find method scoverage() for arguments 
> [build_280jc8bwyhuf6s4pit1n8ckjs$_run_closure30$_closure86@799527c6] on 
> project ':core' of type org.gradle.api.Project.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13038) document IdentityReplicationPolicy

2021-07-06 Thread Ryanne Dolan (Jira)
Ryanne Dolan created KAFKA-13038:


 Summary: document IdentityReplicationPolicy
 Key: KAFKA-13038
 URL: https://issues.apache.org/jira/browse/KAFKA-13038
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Affects Versions: 3.0.0
Reporter: Ryanne Dolan


We should add something to the Geo-Replication section of the docs to introduce 
IdentityReplicationPolicy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13037) "Thread state is already PENDING_SHUTDOWN" log spam

2021-07-06 Thread John Gray (Jira)
John Gray created KAFKA-13037:
-

 Summary: "Thread state is already PENDING_SHUTDOWN" log spam
 Key: KAFKA-13037
 URL: https://issues.apache.org/jira/browse/KAFKA-13037
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.7.1, 2.8.0
Reporter: John Gray


KAFKA-12462 introduced a 
[change|https://github.com/apache/kafka/commit/4fe4cdc4a61cbac8e070a8b5514403235194015b#diff-76f629d0df8bd30b2593cbcf4a2dc80de3167ebf55ef8b5558e6e6285a057496R722]
 that raised this "Thread state is already {}" log from debug to info. We are 
running into a problem with our Streams apps: when they hit an unrecoverable 
exception that shuts down the stream thread, this log line is printed about 
50,000 times per second per thread. I am guessing it is printed once per record 
we have queued up when the exception happens. We have temporarily raised the 
StreamThread logger to WARN instead of INFO to avoid the spam, but we then miss 
the other useful INFO logs from that class. Could this log be reverted back to 
debug? Thank you! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13036) Replace EasyMock and PowerMock with Mockito for RocksDBMetricsRecorderTest

2021-07-06 Thread YI-CHEN WANG (Jira)
YI-CHEN WANG created KAFKA-13036:


 Summary: Replace EasyMock and PowerMock with Mockito for 
RocksDBMetricsRecorderTest
 Key: KAFKA-13036
 URL: https://issues.apache.org/jira/browse/KAFKA-13036
 Project: Kafka
  Issue Type: Sub-task
Reporter: YI-CHEN WANG
Assignee: YI-CHEN WANG






--
This message was sent by Atlassian Jira
(v8.3.4#803005)