Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #352

2021-07-21 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 485971 lines...]
[2021-07-22T02:51:40.139Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithTopicOnly() STARTED
[2021-07-22T02:51:42.914Z] 
[2021-07-22T02:51:42.914Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithTopicOnly() PASSED
[2021-07-22T02:51:42.914Z] 
[2021-07-22T02:51:42.914Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithTopicPartition() STARTED
[2021-07-22T02:51:46.471Z] 
[2021-07-22T02:51:46.471Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithTopicPartition() PASSED
[2021-07-22T02:51:46.471Z] 
[2021-07-22T02:51:46.471Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsNonExistingGroup() STARTED
[2021-07-22T02:51:48.211Z] 
[2021-07-22T02:51:48.212Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsNonExistingGroup() PASSED
[2021-07-22T02:51:48.212Z] 
[2021-07-22T02:51:48.212Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicOnly() STARTED
[2021-07-22T02:51:51.767Z] 
[2021-07-22T02:51:51.767Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicOnly() PASSED
[2021-07-22T02:51:51.767Z] 
[2021-07-22T02:51:51.767Z] TopicCommandIntegrationTest > 
testAlterPartitionCount() STARTED
[2021-07-22T02:51:56.353Z] 
[2021-07-22T02:51:56.353Z] TopicCommandIntegrationTest > 
testAlterPartitionCount() PASSED
[2021-07-22T02:51:56.353Z] 
[2021-07-22T02:51:56.353Z] TopicCommandIntegrationTest > 
testCreatePartitionsDoesNotRetryThrottlingQuotaExceededException() STARTED
[2021-07-22T02:52:01.104Z] 
[2021-07-22T02:52:01.104Z] TopicCommandIntegrationTest > 
testCreatePartitionsDoesNotRetryThrottlingQuotaExceededException() PASSED
[2021-07-22T02:52:01.104Z] 
[2021-07-22T02:52:01.104Z] TopicCommandIntegrationTest > 
testAlterWhenTopicDoesntExistWithIfExists() STARTED
[2021-07-22T02:52:04.745Z] 
[2021-07-22T02:52:04.745Z] TopicCommandIntegrationTest > 
testAlterWhenTopicDoesntExistWithIfExists() PASSED
[2021-07-22T02:52:04.745Z] 
[2021-07-22T02:52:04.745Z] TopicCommandIntegrationTest > 
testCreateWithDefaultReplication() STARTED
[2021-07-22T02:52:09.331Z] 
[2021-07-22T02:52:09.331Z] TopicCommandIntegrationTest > 
testCreateWithDefaultReplication() PASSED
[2021-07-22T02:52:09.331Z] 
[2021-07-22T02:52:09.331Z] TopicCommandIntegrationTest > 
testDescribeAtMinIsrPartitions() STARTED
[2021-07-22T02:52:19.458Z] 
[2021-07-22T02:52:19.458Z] TopicCommandIntegrationTest > 
testDescribeAtMinIsrPartitions() PASSED
[2021-07-22T02:52:19.458Z] 
[2021-07-22T02:52:19.458Z] TopicCommandIntegrationTest > 
testCreateWithNegativeReplicationFactor() STARTED
[2021-07-22T02:52:22.224Z] 
[2021-07-22T02:52:22.224Z] TopicCommandIntegrationTest > 
testCreateWithNegativeReplicationFactor() PASSED
[2021-07-22T02:52:22.224Z] 
[2021-07-22T02:52:22.224Z] TopicCommandIntegrationTest > 
testCreateWithInvalidReplicationFactor() STARTED
[2021-07-22T02:52:26.014Z] 
[2021-07-22T02:52:26.014Z] TopicCommandIntegrationTest > 
testCreateWithInvalidReplicationFactor() PASSED
[2021-07-22T02:52:26.014Z] 
[2021-07-22T02:52:26.014Z] TopicCommandIntegrationTest > 
testDeleteTopicDoesNotRetryThrottlingQuotaExceededException() STARTED
[2021-07-22T02:52:29.578Z] 
[2021-07-22T02:52:29.578Z] TopicCommandIntegrationTest > 
testDeleteTopicDoesNotRetryThrottlingQuotaExceededException() PASSED
[2021-07-22T02:52:29.578Z] 
[2021-07-22T02:52:29.578Z] TopicCommandIntegrationTest > 
testListTopicsWithExcludeInternal() STARTED
[2021-07-22T02:52:34.164Z] 
[2021-07-22T02:52:34.164Z] TopicCommandIntegrationTest > 
testListTopicsWithExcludeInternal() PASSED
[2021-07-22T02:52:34.164Z] 
[2021-07-22T02:52:34.164Z] TopicCommandIntegrationTest > 
testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress() STARTED
[2021-07-22T02:52:37.719Z] 
[2021-07-22T02:52:37.719Z] TopicCommandIntegrationTest > 
testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress() PASSED
[2021-07-22T02:52:37.719Z] 
[2021-07-22T02:52:37.719Z] TopicCommandIntegrationTest > 
testCreateWithNegativePartitionCount() STARTED
[2021-07-22T02:52:41.275Z] 
[2021-07-22T02:52:41.275Z] TopicCommandIntegrationTest > 
testCreateWithNegativePartitionCount() PASSED
[2021-07-22T02:52:41.275Z] 
[2021-07-22T02:52:41.275Z] TopicCommandIntegrationTest > 
testAlterWhenTopicDoesntExist() STARTED
[2021-07-22T02:52:47.220Z] 
[2021-07-22T02:52:47.220Z] TopicCommandIntegrationTest > 
testAlterWhenTopicDoesntExist() PASSED
[2021-07-22T02:52:47.220Z] 
[2021-07-22T02:52:47.220Z] TopicCommandIntegrationTest > 
testCreateAlterTopicWithRackAware() STARTED
[2021-07-22T02:52:54.194Z] 
[2021-07-22T02:52:54.194Z] TopicCommandIntegrationTest > 

Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.0 #57

2021-07-21 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-763: Range queries with open endpoints

2021-07-21 Thread Luke Chen
Hi Patrick,
I like this KIP!

+1 (non-binding)

Luke

On Thu, Jul 22, 2021 at 7:04 AM Matthias J. Sax  wrote:

> Thanks for the KIP.
>
> +1 (binding)
>
>
> -Matthias
>
> On 7/21/21 1:18 PM, Patrick Stuedi wrote:
> > Hi all,
> >
> > Thanks for the feedback on the KIP, I have updated the KIP and would like
> > to start the voting.
> >
> > The KIP can be found here:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-763%3A+Range+queries+with+open+endpoints
> >
> > Please vote in this thread.
> >
> > Thanks!
> > -Patrick
> >
>
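
For readers following the vote, a minimal sketch of the kind of query KIP-763 enables,
assuming the proposal's approach of treating a null bound as an open endpoint in
ReadOnlyKeyValueStore#range (the store name "orders" and the key/value types are
illustrative; see the KIP page for the final API):

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class OpenEndedRangeExample {

    // Queries every entry with key <= upperKey by leaving the lower endpoint open.
    // Passing null as a bound is the open-endpoint behavior proposed by KIP-763.
    public static void printUpTo(KafkaStreams streams, String upperKey) {
        ReadOnlyKeyValueStore<String, Long> store = streams.store(
            StoreQueryParameters.fromNameAndType(
                "orders", QueryableStoreTypes.<String, Long>keyValueStore()));
        try (KeyValueIterator<String, Long> iter = store.range(null, upperKey)) {
            while (iter.hasNext()) {
                System.out.println(iter.next());
            }
        }
    }
}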


Re: [DISCUSS] KIP-761: Add total blocked time metric to streams

2021-07-21 Thread Sophie Blee-Goldman
Hey Rohan,

The current metrics proposed in the KIP all LGTM, but if the goal is to
include *all* time spent
blocking on any client API then there are a few that might need to be added
to this list. For
example Consumer#committed is called in StreamTask to get the offsets
during initialization,
and several blocking AdminClient APIs are used by Streams such as
#listOffsets (as well as
 #createTopics and #describeTopics, but only during a rebalance to
create/validate the internal
topics).

One thing that wasn't quite clear to me was whether this metric is intended
to reflect only the
time spent blocking on Kafka *while the StreamThread is in RUNNING*, or
just the total time
spent inside any Kafka client API after the thread has started up. If we
only care about the
former, then the current proposal is sufficient, but if we're actually
targeting the latter then
we need to consider those additional APIs listed above.

Personally, I feel that both could actually be useful -- ie, we expose one
metric that only
corresponds to time we spend blocked on Kafka that could otherwise be spent
processing
records from active tasks, and a second metric that reports all time spent
blocking on Kafka,
whether the thread was still initializing (ie in the PARTITIONS_ASSIGNED
state) or actively
processing (ie in RUNNING). WDYT?
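
As a concrete (and hedged) illustration of how an operator might consume the proposed
metrics, here is a minimal sketch that sums the per-client "*-time-total" metrics
discussed in this thread into a single blocked-time figure. The metric names are the
ones being discussed (flush-time-total, txn-commit-time-total, offset-commit-time-total,
...) and may differ from what ships with the final KIP:

import java.util.Map;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.streams.KafkaStreams;

public class BlockedTimeProbe {

    // Sums every metric whose name ends with "-time-total" as a rough stand-in for
    // the aggregate blocked-time metric proposed in the KIP.
    public static double totalBlockedTime(KafkaStreams streams) {
        double total = 0.0;
        for (Map.Entry<MetricName, ? extends Metric> entry : streams.metrics().entrySet()) {
            if (entry.getKey().name().endsWith("-time-total")) {
                Object value = entry.getValue().metricValue();
                if (value instanceof Double) {
                    total += (Double) value;
                }
            }
        }
        return total;
    }
}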


On Tue, Jul 20, 2021 at 11:49 PM Rohan Desai 
wrote:

> > I remember now that we moved the round-trip PID's txn completion logic
> into
> init-transaction and commit/abort-transaction. So I think we'd count time
> spent in StreamsProducer#initTransaction as well (admittedly it is in most
> cases a one-time thing).
>
> Makes sense - I'll update the KIP
>
> On Tue, Jul 20, 2021 at 11:48 PM Rohan Desai 
> wrote:
>
> >
> > > I had a question - it seems like from the descriptions of
> > `txn-commit-time-total` and `offset-commit-time-total` that they measure
> > similar processes for ALOS and EOS, but only `txn-commit-time-total` is
> > included in `blocked-time-total`. Why isn't `offset-commit-time-total`
> also
> > included?
> >
> > I've updated the KIP to include it.
> >
> > > Aside from `flush-time-total`, `txn-commit-time-total` and
> > `offset-commit-time-total`, which will be producer/consumer client
> metrics,
> > the rest of the metrics will be streams metrics that will be thread
> level,
> > is that right?
> >
> > Based on the feedback from Guozhang, I've updated the KIP to reflect that
> > the lower-level metrics are all client metrics that are then summed to
> > compute the blocked time metric, which is a Streams metric.
> >
> > On Tue, Jul 20, 2021 at 11:58 AM Rohan Desai 
> > wrote:
> >
> >> > Similarly, I think "txn-commit-time-total" and
> >> "offset-commit-time-total" may better be inside producer and consumer
> >> clients respectively.
> >>
> >> I agree for offset-commit-time-total. For txn-commit-time-total I'm
> >> proposing we measure `StreamsProducer.commitTransaction`, which wraps
> >> multiple producer calls (sendOffsets, commitTransaction)
> >>
> >> > > For "txn-commit-time-total" specifically, besides
> producer.commitTxn.
> >> other txn-related calls may also be blocking, including
> >> producer.beginTxn/abortTxn, I saw you mentioned "txn-begin-time-total"
> >> later in the doc, but did not include it as a separate metric, and
> >> similarly, should we have a `txn-abort-time-total` as well? If yes,
> could
> >> you update the KIP page accordingly.
> >>
> >> `beginTransaction` is not blocking - I meant to remove that from that
> >> doc. I'll add something for abort.
> >>
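
A hedged sketch of what "measure `StreamsProducer.commitTransaction`", as proposed
earlier in this message, could look like, using plain System.nanoTime bookkeeping; the
real implementation would record into the Kafka metrics Sensor machinery, and the field
name here is illustrative:

import java.util.concurrent.atomic.AtomicLong;

public class BlockedTimeBookkeeping {

    // Cumulative nanoseconds spent blocked; the real code would feed a metrics
    // Sensor backed by a cumulative-sum stat rather than an AtomicLong.
    private final AtomicLong txnCommitTimeTotalNs = new AtomicLong();

    // Wraps a blocking call (e.g. StreamsProducer#commitTransaction, which itself
    // wraps sendOffsetsToTransaction + commitTransaction) and records elapsed time.
    public void timeBlockingCall(Runnable blockingCall) {
        long start = System.nanoTime();
        try {
            blockingCall.run();
        } finally {
            txnCommitTimeTotalNs.addAndGet(System.nanoTime() - start);
        }
    }

    public long totalNs() {
        return txnCommitTimeTotalNs.get();
    }
}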
> >> On Mon, Jul 19, 2021 at 11:55 PM Rohan Desai 
> >> wrote:
> >>
> >>> Thanks for the review Guozhang! responding to your feedback inline:
> >>>
> >>> > 1) I agree that the current ratio metrics is just "snapshot in
> >>> point", and
> >>> more flexible metrics that would allow reporters to calculate based on
> >>> window intervals are better. However, the current mechanism of the
> >>> proposed
> >>> metrics assumes the thread->clients mapping as of today, where each
> >>> thread
> >>> would own exclusively one main consumer, restore consumer, producer and
> >>> an
> >>> admin client. But this mapping may be subject to change in the future.
> >>> Have
> >>> you thought about how this metric can be extended when, e.g. the
> embedded
> >>> clients and stream threads are de-coupled?
> >>>
> >>> Of course this depends on how exactly we refactor the runtime -
> assuming
> >>> that we plan to factor out consumers into an "I/O" layer that is
> >>> responsible for receiving records and enqueuing them to be processed by
> >>> processing threads, then I think it should be reasonable to count the
> time
> >>> we spend blocked on this internal queue(s) as blocked. The main concern
> >>> there to me is that the I/O layer would be doing something expensive
> like
> >>> decompression that shouldn't be counted as "blocked". But if that
> really is
> >>> so expensive that it starts to throw off our ratios then it's probably
> >>> 

Re: [DISCUSS] KIP-714: Client metrics and observability

2021-07-21 Thread Colin McCabe
On Tue, Jun 29, 2021, at 07:22, Magnus Edenhill wrote:
> Den tors 17 juni 2021 kl 00:52 skrev Colin McCabe :
> > A few critiques:
> >
> > - As I wrote above, I think this could benefit a lot by being split into
> > several RPCs. A registration RPC, a report RPC, and an unregister RPC seem
> > like logical choices.
> >
> 
> Responded to this in your previous mail, but in short I think a single
> request is sufficient and keeps the implementation complexity / state down.
> 

Hi Magnus,

I still suspect that trying to do everything with a single RPC is more complex 
than using multiple RPCs.

Can you go into more detail about how the client learns what metrics it should 
send? This was the purpose of the "registration" step in my scheme above.

It seems quite awkward to combine an RPC for reporting metrics with an RPC for 
finding out what metrics are configured to be reported. For example, how would 
you build a tool to check what metrics are configured to be reported? Does the 
tool have to report fake metrics, just because there's no other way to get back 
that information? Seems wrong. (It would be a bit like combining createTopics 
and listTopics for "simplicity")

> > - I don't think the client should be able to choose its own UUID. This
> > adds complexity and introduces a chance that clients will choose an ID that
> > is not unique. We already have an ID that the client itself supplies
> > (clientID) so there is no need to introduce another such ID.
> >
> 
> The CLIENT_INSTANCE_ID (which is a combination of the client.id and a UUID)
> is actually generated by the receiving broker on first contact.
> The need for a new unique semi-random id is outlined in the KIP, but in
> short: the client.id is not unique, and we need something unique that still
> is prefix-matchable to the client.id so that we can add subscriptions
> either using prefix-matching of just the client.id (which may match one or
> more client instances), or exact matching, which will match one specific
> client instance.

Hmm... the client id is already sent in every RPC as part of the header. It's 
not necessary to send it again as part of one of the other RPC fields, right?

More generally, why does the client instance ID need to be prefix-matchable? 
That seems like an implementation detail of the metrics collection system used 
on the broker side. Maybe someone wants to group by things other than client 
IDs -- perhaps client versions, for instance. By the same argument, we should 
put the client version string in the client instance ID, since someone might 
want to group by that. Or maybe we should include the hostname, and the IP, 
and, and, and... You see the issue here. I think we shouldn't get involved in 
this kind of decision -- if we just pass a UUID, the broker-side software can 
group it or prefix it however it wants internally.
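
To picture that, a small sketch with plain java.util.UUID and made-up broker-side
bookkeeping (none of this is the KIP's actual schema): the client carries only an
opaque UUID, and any grouping by client.id, version, etc. happens on the receiving
side.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class InstanceIdGrouping {

    // Client side: an opaque instance id (a UUID is 16 bytes on the wire as two
    // longs), with no client.id, version, or host baked into it.
    public static UUID newClientInstanceId() {
        return UUID.randomUUID();
    }

    // Broker/plugin side: attach whatever grouping dimensions are wanted
    // (client.id, software version, principal, ...) without encoding them into
    // the id itself. The fields below are illustrative only.
    private static final class ClientContext {
        final String clientId;
        final String softwareVersion;

        ClientContext(String clientId, String softwareVersion) {
            this.clientId = clientId;
            this.softwareVersion = softwareVersion;
        }
    }

    private final Map<UUID, ClientContext> contexts = new ConcurrentHashMap<>();

    public void register(UUID instanceId, String clientId, String softwareVersion) {
        contexts.put(instanceId, new ClientContext(clientId, softwareVersion));
    }

    public String groupingKey(UUID instanceId) {
        ClientContext ctx = contexts.get(instanceId);
        return ctx == null ? "unknown" : ctx.clientId + "/" + ctx.softwareVersion;
    }
}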

> > - In general the schema seems to have a bad case of string-itis. UUID,
> > content type, and requested metrics are all strings. Since these messages
> > will be sent very frequently, it's quite costly to use strings for all
> > these things. We have a type for UUID, which uses 16 bytes -- let's use
> > that type for client instance ID, rather than a string which will be much
> > larger. Also, since we already send clientID in the message header, there
> > is no need to include it again in the instance ID.
> >
> 
> As explained above we need the client.id in the CLIENT_INSTANCE_ID. And I
> don't think the overhead of this one string per request is going to be much
> of an issue,
> typical metric push intervals are probably in the >60s range.
> If this becomes a problem we could use a per-connection identifier that the
> broker translates to the client instance id before pushing metrics upwards
> in the system.
> 

This is actually an interesting design question -- why not use a 
per-TCP-connection identifier, rather than a per-client-instance identifier? If 
we are grouping by other things anyway (clientID, principal, etc.) on the 
server side, do we need to maintain a per-process identifier rather than a 
per-connection one?

> 
> > - I think it would also be nice to have an enum or something for
> > AcceptedContentTypes, RequestedMetrics, etc. We know that new additions to
> > these categories will require KIPs, so it should be straightforward for the
> > project to just have an enum that allows us to communicate these as ints.
> >
> 
> I'm thinking this might be overly constraining. The broker doesn't parse or
> handle the received metrics data itself but just pushes it to the metrics
> plugin, using an enum would require a KIP and broker upgrade if the metrics 
> plugin
> supports a newer version of OTLP.
> It is probably better if we don't strictly control the metric format itself.
> 

Unfortunately, we have to strictly control the metrics format, because 
otherwise clients can't implement it. I agree that we don't need to specify how 
the broker-side code works, since that is 

Re: [VOTE] KIP-590: Redirect Zookeeper Mutation Protocols to The Controller

2021-07-21 Thread Colin McCabe
Agreed. Flexible fields are the way to go here, I think.

Thanks for the discussion, all.

best,
Colin


On Tue, Jul 20, 2021, at 07:18, Ismael Juma wrote:
> Hi Ron,
> 
> Thanks for bringing this up. Thinking about it, the combination of flexible
> fields and the principal type field gives us enough flexibility that we
> don't need a magic number.
> 
> Ismael
> 
> P.S. For a magic number to be useful for third party implementations, we
> would need a mechanism to coordinate what each number means, so it's a bit
> complicated to do it well.
> 
> On Tue, Jul 13, 2021 at 9:48 AM Ron Dagostino  wrote:
> 
> > Hi everyone.  I know it has been 9 months since the last message appeared
> > on this vote thread, but a potential oversight exists in the implementation
> > of DefaultKafkaPrincipalBuilder.KafkaPrincipalSerde from
> > https://github.com/apache/kafka/pull/9103.  Specifically, there is no
> > magic
> > number at the top of the wire format, and this causes obscure parsing
> > errors if incompatible principal implementations are mixed.  A magic number
> > at the top would allow deserialization code to provide an intuitive error
> > message.  The current implementation (without a magic number) was released
> > in 2.8, but it presumably has never been used since forwarding is disabled
> > (
> >
> > https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/Kafka.scala#L73
> > ).
> > We would like to change the implementation to include a magic number at the
> > top for the 3.0 release.  This would be a breaking change, but again this
> > presumably has never been used anywhere in production and would therefore
> > break nothing.
> >
> > Note that forwarding is always enabled for KRaft-based clusters, but such
> > clusters are not supported in any production sense and there is no upgrade
> > path from a 2.8 KRaft cluster to a 3.0 KRaft cluster (from
> > config/kraft/README.md: "KRaft mode in Kafka 2.8 is provided for testing
> > only, NOT for production. We do not yet support upgrading existing
> > ZooKeeper-based Kafka clusters into this mode. In fact, when Kafka 3.0 is
> > released, it will not be possible to upgrade your KRaft clusters from 2.8
> > to 3.0").
> >
> > A PR to add the magic number appears at
> > https://github.com/apache/kafka/pull/11038.
> >
> > Please respond to this thread if you have any concerns or objections.
> >
> > Thanks,
> >
> > Ron
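
To make the magic-number idea above concrete, a small hypothetical serde sketch; the
constant, field layout, and error text are made up and are not the actual
KafkaPrincipalSerde wire format (that is defined by the PR linked above):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class MagicPrefixedPrincipalSerde {

    // Hypothetical magic number; the real value would be fixed by the implementation.
    private static final byte MAGIC = 0x00;

    public static byte[] serialize(String principalType, String name) {
        byte[] type = principalType.getBytes(StandardCharsets.UTF_8);
        byte[] principal = name.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(1 + 4 + type.length + 4 + principal.length);
        buf.put(MAGIC);                       // leading magic byte identifies the format
        buf.putInt(type.length).put(type);
        buf.putInt(principal.length).put(principal);
        return buf.array();
    }

    public static String[] deserialize(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        byte magic = buf.get();
        if (magic != MAGIC) {
            // With a magic byte up front, mixing incompatible principal
            // implementations fails with a clear message instead of an obscure
            // parsing error further into the payload.
            throw new IllegalArgumentException("Unexpected principal serde magic byte: " + magic);
        }
        byte[] type = new byte[buf.getInt()];
        buf.get(type);
        byte[] name = new byte[buf.getInt()];
        buf.get(name);
        return new String[] {
            new String(type, StandardCharsets.UTF_8),
            new String(name, StandardCharsets.UTF_8)
        };
    }
}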
> >
> > On Fri, Oct 9, 2020 at 1:21 PM Boyang Chen 
> > wrote:
> >
> > > Thanks Jason for the great thoughts, and we basically decided to shift
> > > gears toward a limited impersonation approach offline.
> > >
> > > The goal here is to simplify the handling logic by relying on the active
> > > controller to do the actual authorization for resources in the original
> > > client request. We are also adding the `KafkaPrincipalSerde` type to
> > > provide the functionality for principal serialization/deserialization so
> > > that it could embed in the Envelope and send to the active controller.
> > > Before 3.0, customized principal builders could optionally extend the
> > serde
> > > type, which is required after 3.0 is released. Either way having the
> > > capability to serde KafkaPrincipal becomes a prerequisite to enable
> > > redirection besides IBP. Additionally, we add a forwardingPrincipal field
> > > to the Authorizer context for authorization and audit logging purposes,
> > > instead of using tagged fields in the header.
> > >
> > > The KIP is updated to reflect the current approach, thanks.
> > >
> > >
> > >
> > > On Fri, Sep 25, 2020 at 5:55 PM Jason Gustafson 
> > > wrote:
> > >
> > > > Hey All,
> > > >
> > > > So the main thing the EnvelopeRequest gives us is a way to avoid
> > > converting
> > > > older API versions in order to attach the initial principal name and
> > the
> > > > clientId. It also saves the need to add the initial principal and
> > client
> > > id
> > > > as a tagged field to all of the forwarded protocols, which is nice. We
> > > > still have the challenge of advertising API versions which are
> > compatible
> > > > with both the broker receiving the request and the controller that the
> > > > request is ultimately forwarded to, but not sure I see a way around
> > that.
> > > >
> > > > I realize I might be walking into a minefield here, but since the
> > > envelope
> > > > is being revisited, it seems useful to compare the approach suggested
> > > above
> > > > with the option relying on impersonation. I favor the use of
> > > impersonation
> > > > because it makes forwarding simpler. As the proposal stands, we will
> > have
> > > > to maintain logic for each forwarded API to unpack, authorize, and
> > repack
> > > > any forwarded requests which flow through the broker. This is probably
> > > not
> > > > a huge concern from an efficiency perspective as long as we are talking
> > > > about just the Admin APIs, but it does have a big maintenance cost
> > since
> > > > we'll need to ensure that every new field gets 

Re: [VOTE] KIP-746: Revise KRaft Metadata Records

2021-07-21 Thread Colin McCabe
Hi all,

I made an addendum to this KIP to reflect some of the changes we did recently. 
Specifically, we decided that since we are not supporting KRaft upgrade from 
2.8 to 3.0, we should keep the message versions at 0 and increment the frame 
version instead.

best,
Colin


On Thu, Jun 10, 2021, at 08:38, Colin McCabe wrote:
> Hi all,
> 
> Thanks for the discussion and votes.
> 
> The vote passes with binding +1s from:
> Jason Gustafson
> Jun Rao
> David Arthur
> 
> and a non-binding +1 from
> Israel Ekpo
> 
> best,
> Colin
> 
> 
> On Wed, Jun 9, 2021, at 18:15, David Arthur wrote:
> > Thanks, Colin, looks good to me. +1
> > 
> > On Wed, Jun 9, 2021 at 8:32 PM Israel Ekpo  wrote:
> > 
> > > Makes sense to me
> > >
> > > +1 (non-binding)
> > >
> > >
> > > On Wed, Jun 9, 2021 at 7:05 PM Jun Rao  wrote:
> > >
> > > > Hi, Colin,
> > > >
> > > > Thanks for the KIP. +1 from me.
> > > >
> > > > Jun
> > > >
> > > > On Wed, Jun 9, 2021 at 9:36 AM Jason Gustafson
> > >  > > > >
> > > > wrote:
> > > >
> > > > > +1 Thanks Colin!
> > > > >
> > > > > On Thu, Jun 3, 2021 at 4:30 PM Colin McCabe 
> > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I'd like to call a vote for KIP-746: Revise KRaft Metadata Records.
> > > > This
> > > > > > is a minor KIP which revises the KRaft metadata records slightly for
> > > > the
> > > > > > upcoming 3.0 release.
> > > > > >
> > > > > > The KIP is at: https://cwiki.apache.org/confluence/x/34zOCg
> > > > > >
> > > > > > best,
> > > > > > Colin
> > > > > >
> > > > >
> > > >
> > >
> > 
> > 
> > -- 
> > David Arthur
> > 
> 


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #351

2021-07-21 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 486265 lines...]
[2021-07-22T00:37:00.931Z] 
[2021-07-22T00:37:00.931Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsNonExistingGroup() PASSED
[2021-07-22T00:37:00.931Z] 
[2021-07-22T00:37:00.931Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicOnly() STARTED
[2021-07-22T00:37:03.954Z] 
[2021-07-22T00:37:03.954Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicOnly() PASSED
[2021-07-22T00:37:03.954Z] 
[2021-07-22T00:37:03.954Z] TopicCommandIntegrationTest > 
testAlterPartitionCount() STARTED
[2021-07-22T00:37:07.690Z] 
[2021-07-22T00:37:07.690Z] TopicCommandIntegrationTest > 
testAlterPartitionCount() PASSED
[2021-07-22T00:37:07.690Z] 
[2021-07-22T00:37:07.690Z] TopicCommandIntegrationTest > 
testCreatePartitionsDoesNotRetryThrottlingQuotaExceededException() STARTED
[2021-07-22T00:37:12.113Z] 
[2021-07-22T00:37:12.113Z] TopicCommandIntegrationTest > 
testCreatePartitionsDoesNotRetryThrottlingQuotaExceededException() PASSED
[2021-07-22T00:37:12.113Z] 
[2021-07-22T00:37:12.113Z] TopicCommandIntegrationTest > 
testAlterWhenTopicDoesntExistWithIfExists() STARTED
[2021-07-22T00:37:15.412Z] 
[2021-07-22T00:37:15.412Z] TopicCommandIntegrationTest > 
testAlterWhenTopicDoesntExistWithIfExists() PASSED
[2021-07-22T00:37:15.412Z] 
[2021-07-22T00:37:15.412Z] TopicCommandIntegrationTest > 
testCreateWithDefaultReplication() STARTED
[2021-07-22T00:37:19.614Z] 
[2021-07-22T00:37:19.614Z] TopicCommandIntegrationTest > 
testCreateWithDefaultReplication() PASSED
[2021-07-22T00:37:19.614Z] 
[2021-07-22T00:37:19.614Z] TopicCommandIntegrationTest > 
testDescribeAtMinIsrPartitions() STARTED
[2021-07-22T00:37:29.020Z] 
[2021-07-22T00:37:29.020Z] TopicCommandIntegrationTest > 
testDescribeAtMinIsrPartitions() PASSED
[2021-07-22T00:37:29.020Z] 
[2021-07-22T00:37:29.020Z] TopicCommandIntegrationTest > 
testCreateWithNegativeReplicationFactor() STARTED
[2021-07-22T00:37:32.735Z] 
[2021-07-22T00:37:32.735Z] TopicCommandIntegrationTest > 
testCreateWithNegativeReplicationFactor() PASSED
[2021-07-22T00:37:32.735Z] 
[2021-07-22T00:37:32.735Z] TopicCommandIntegrationTest > 
testCreateWithInvalidReplicationFactor() STARTED
[2021-07-22T00:37:39.018Z] 
[2021-07-22T00:37:39.018Z] TopicCommandIntegrationTest > 
testCreateWithInvalidReplicationFactor() PASSED
[2021-07-22T00:37:39.018Z] 
[2021-07-22T00:37:39.018Z] TopicCommandIntegrationTest > 
testDeleteTopicDoesNotRetryThrottlingQuotaExceededException() STARTED
[2021-07-22T00:37:42.919Z] 
[2021-07-22T00:37:42.919Z] TopicCommandIntegrationTest > 
testDeleteTopicDoesNotRetryThrottlingQuotaExceededException() PASSED
[2021-07-22T00:37:42.919Z] 
[2021-07-22T00:37:42.919Z] TopicCommandIntegrationTest > 
testListTopicsWithExcludeInternal() STARTED
[2021-07-22T00:37:47.516Z] 
[2021-07-22T00:37:47.516Z] TopicCommandIntegrationTest > 
testListTopicsWithExcludeInternal() PASSED
[2021-07-22T00:37:47.516Z] 
[2021-07-22T00:37:47.516Z] TopicCommandIntegrationTest > 
testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress() STARTED
[2021-07-22T00:37:51.515Z] 
[2021-07-22T00:37:51.515Z] TopicCommandIntegrationTest > 
testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress() PASSED
[2021-07-22T00:37:51.515Z] 
[2021-07-22T00:37:51.515Z] TopicCommandIntegrationTest > 
testCreateWithNegativePartitionCount() STARTED
[2021-07-22T00:37:56.293Z] 
[2021-07-22T00:37:56.293Z] TopicCommandIntegrationTest > 
testCreateWithNegativePartitionCount() PASSED
[2021-07-22T00:37:56.293Z] 
[2021-07-22T00:37:56.293Z] TopicCommandIntegrationTest > 
testAlterWhenTopicDoesntExist() STARTED
[2021-07-22T00:37:59.858Z] 
[2021-07-22T00:37:59.858Z] TopicCommandIntegrationTest > 
testAlterWhenTopicDoesntExist() PASSED
[2021-07-22T00:37:59.858Z] 
[2021-07-22T00:37:59.858Z] TopicCommandIntegrationTest > 
testCreateAlterTopicWithRackAware() STARTED
[2021-07-22T00:38:08.657Z] 
[2021-07-22T00:38:08.657Z] TopicCommandIntegrationTest > 
testCreateAlterTopicWithRackAware() PASSED
[2021-07-22T00:38:08.657Z] 
[2021-07-22T00:38:08.657Z] TopicCommandIntegrationTest > 
testListTopicsWithIncludeList() STARTED
[2021-07-22T00:38:14.154Z] 
[2021-07-22T00:38:14.154Z] TopicCommandIntegrationTest > 
testListTopicsWithIncludeList() PASSED
[2021-07-22T00:38:14.154Z] 
[2021-07-22T00:38:14.154Z] TopicCommandIntegrationTest > testTopicDeletion() 
STARTED
[2021-07-22T00:38:19.620Z] 
[2021-07-22T00:38:19.620Z] TopicCommandIntegrationTest > testTopicDeletion() 
PASSED
[2021-07-22T00:38:19.620Z] 
[2021-07-22T00:38:19.620Z] TopicCommandIntegrationTest > 
testCreateWithDefaults() STARTED
[2021-07-22T00:38:23.977Z] 
[2021-07-22T00:38:23.977Z] TopicCommandIntegrationTest > 
testCreateWithDefaults() PASSED
[2021-07-22T00:38:23.977Z] 
[2021-07-22T00:38:23.977Z] 

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.0 #56

2021-07-21 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 420842 lines...]
[2021-07-22T00:32:40.015Z] > Task :connect:file:testClasses
[2021-07-22T00:32:40.340Z] Recording test results
[2021-07-22T00:32:40.637Z] [INFO] Scanning for projects...
[2021-07-22T00:32:41.569Z] [INFO] 
[2021-07-22T00:32:41.569Z] [INFO] --< 
org.apache.maven:standalone-pom >---
[2021-07-22T00:32:41.569Z] [INFO] Building Maven Stub Project (No POM) 1
[2021-07-22T00:32:41.569Z] [INFO] [ pom 
]-
[2021-07-22T00:32:41.569Z] [INFO] 
[2021-07-22T00:32:41.569Z] [INFO] >>> maven-archetype-plugin:3.2.0:generate 
(default-cli) > generate-sources @ standalone-pom >>>
[2021-07-22T00:32:41.569Z] [INFO] 
[2021-07-22T00:32:41.569Z] [INFO] <<< maven-archetype-plugin:3.2.0:generate 
(default-cli) < generate-sources @ standalone-pom <<<
[2021-07-22T00:32:41.569Z] [INFO] 
[2021-07-22T00:32:41.569Z] [INFO] 
[2021-07-22T00:32:41.569Z] [INFO] --- maven-archetype-plugin:3.2.0:generate 
(default-cli) @ standalone-pom ---
[2021-07-22T00:32:41.908Z] > Task :connect:basic-auth-extension:checkstyleTest
[2021-07-22T00:32:41.908Z] > Task :connect:json:testClasses
[2021-07-22T00:32:41.908Z] > Task :connect:file:checkstyleTest
[2021-07-22T00:32:41.908Z] > Task :connect:mirror-client:testClasses
[2021-07-22T00:32:41.908Z] > Task :connect:mirror-client:checkstyleTest
[2021-07-22T00:32:41.908Z] > Task :connect:transforms:testClasses
[2021-07-22T00:32:41.908Z] > Task :connect:json:checkstyleTest
[2021-07-22T00:32:41.908Z] > Task :storage:api:testClasses
[2021-07-22T00:32:41.908Z] > Task :storage:api:checkstyleTest
[2021-07-22T00:32:41.908Z] > Task :streams:examples:testClasses
[2021-07-22T00:32:42.670Z] [INFO] Generating project in Interactive mode
[2021-07-22T00:32:42.670Z] [WARNING] Archetype not found in any catalog. 
Falling back to central repository.
[2021-07-22T00:32:42.670Z] [WARNING] Add a repository with id 'archetype' in 
your settings.xml if archetype's repository is elsewhere.
[2021-07-22T00:32:42.670Z] [INFO] Using property: groupId = streams.examples
[2021-07-22T00:32:42.670Z] [INFO] Using property: artifactId = streams.examples
[2021-07-22T00:32:42.670Z] [INFO] Using property: version = 0.1
[2021-07-22T00:32:42.670Z] [INFO] Using property: package = myapps
[2021-07-22T00:32:42.670Z] Confirm properties configuration:
[2021-07-22T00:32:42.670Z] groupId: streams.examples
[2021-07-22T00:32:42.670Z] artifactId: streams.examples
[2021-07-22T00:32:42.670Z] version: 0.1
[2021-07-22T00:32:42.670Z] package: myapps
[2021-07-22T00:32:42.670Z]  Y: : [INFO] 

[2021-07-22T00:32:42.670Z] [INFO] Using following parameters for creating 
project from Archetype: streams-quickstart-java:3.0.0-SNAPSHOT
[2021-07-22T00:32:42.670Z] [INFO] 

[2021-07-22T00:32:42.670Z] [INFO] Parameter: groupId, Value: streams.examples
[2021-07-22T00:32:42.670Z] [INFO] Parameter: artifactId, Value: streams.examples
[2021-07-22T00:32:42.670Z] [INFO] Parameter: version, Value: 0.1
[2021-07-22T00:32:42.670Z] [INFO] Parameter: package, Value: myapps
[2021-07-22T00:32:42.670Z] [INFO] Parameter: packageInPathFormat, Value: myapps
[2021-07-22T00:32:42.670Z] [INFO] Parameter: package, Value: myapps
[2021-07-22T00:32:42.670Z] [INFO] Parameter: version, Value: 0.1
[2021-07-22T00:32:42.670Z] [INFO] Parameter: groupId, Value: streams.examples
[2021-07-22T00:32:42.670Z] [INFO] Parameter: artifactId, Value: streams.examples
[2021-07-22T00:32:42.670Z] [INFO] Project created from Archetype in dir: 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.0/streams/quickstart/test-streams-archetype/streams.examples
[2021-07-22T00:32:42.670Z] [INFO] 

[2021-07-22T00:32:42.670Z] [INFO] BUILD SUCCESS
[2021-07-22T00:32:42.670Z] [INFO] 

[2021-07-22T00:32:42.670Z] [INFO] Total time:  1.950 s
[2021-07-22T00:32:42.670Z] [INFO] Finished at: 2021-07-22T00:32:42Z
[2021-07-22T00:32:42.670Z] [INFO] 

Cancelling nested steps due to timeout
[2021-07-22T00:32:43.145Z] Sending interrupt signal to process
[2021-07-22T00:32:44.204Z] > Task :streams:examples:checkstyleTest
[2021-07-22T00:32:44.204Z] > Task :streams:test-utils:testClasses
[2021-07-22T00:32:44.204Z] > Task :connect:api:checkstyleTest
[2021-07-22T00:32:44.204Z] > Task :connect:transforms:checkstyleTest
[2021-07-22T00:32:44.204Z] > Task :streams:test-utils:checkstyleTest
[2021-07-22T00:32:46.178Z] > Task :metadata:checkstyleTest
[2021-07-22T00:32:46.178Z] > Task :streams:streams-scala:compileScala FAILED

[VOTE] KIP-761: Add Total Blocked Time Metric to Streams

2021-07-21 Thread Rohan Desai
Now that the discussion thread's been open for a few days, I'm calling for
a vote on
https://cwiki.apache.org/confluence/display/KAFKA/KIP-761%3A+Add+Total+Blocked+Time+Metric+to+Streams


Re: Request permission for KIP creation

2021-07-21 Thread Haruki Okada
Hi Bill.

Thank you for your quick response!

Jul 22, 2021 (Thu) 7:40 Bill Bejeck :

> Hi Haruki,
>
> You're all set with the wiki permissions now.
>
> Regards,
> Bill
>
> On Wed, Jul 21, 2021 at 1:43 PM Haruki Okada  wrote:
>
> > Hi, Kafka.
> >
> > I would like to request permission for KIP creation under
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> > to propose a config-addition related to
> > https://issues.apache.org/jira/browse/KAFKA-9648 .
> >
> > JIRA id: ocadaruma
> > email: ocadar...@gmail.com
> >
> >
> > Thanks,
> >
> > --
> > 
> > Okada Haruki
> > ocadar...@gmail.com
> > 
> >
>


-- 

Okada Haruki
ocadar...@gmail.com



Re: [VOTE] KIP-763: Range queries with open endpoints

2021-07-21 Thread Matthias J. Sax
Thanks for the KIP.

+1 (binding)


-Matthias

On 7/21/21 1:18 PM, Patrick Stuedi wrote:
> Hi all,
> 
> Thanks for the feedback on the KIP, I have updated the KIP and would like
> to start the voting.
> 
> The KIP can be found here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-763%3A+Range+queries+with+open+endpoints
> 
> Please vote in this thread.
> 
> Thanks!
> -Patrick
> 


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 2.8 #53

2021-07-21 Thread Apache Jenkins Server
See 




Re: Request permission for KIP creation

2021-07-21 Thread Bill Bejeck
Hi Haruki,

You're all set with the wiki permissions now.

Regards,
Bill

On Wed, Jul 21, 2021 at 1:43 PM Haruki Okada  wrote:

> Hi, Kafka.
>
> I would like to request permission for KIP creation under
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> to propose a config-addition related to
> https://issues.apache.org/jira/browse/KAFKA-9648 .
>
> JIRA id: ocadaruma
> email: ocadar...@gmail.com
>
>
> Thanks,
>
> --
> 
> Okada Haruki
> ocadar...@gmail.com
> 
>


Re: permission request

2021-07-21 Thread Bill Bejeck
Hi Mohamed,

You're all set for Jira now, but I couldn't find you on the wiki.  Can you
confirm the wiki id?

Thanks for your interest in Apache Kafka.

-Bill

On Wed, Jul 21, 2021 at 8:57 AM mohamed amine belkhodja <
belkhodja.am...@gmail.com> wrote:

> Hi,
>
> I'm interested in contributing to the Kafka project; here are my IDs:
>
> wiki: ae84fb5e7a3a25da017ac90bfbce0240
> jira user name: belkhodja.amine
>
> *Bien cordialement / Best regards*
> Mohamed Amine Belkhodja,
> Mobile: +(33) 06.26.61.95.48
> email: belkhodja.am...@gmail.com
>


Jenkins build became unstable: Kafka » Kafka Branch Builder » 3.0 #55

2021-07-21 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #350

2021-07-21 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 486474 lines...]
[2021-07-21T22:23:24.883Z] 
[2021-07-21T22:23:24.883Z] See 
https://docs.gradle.org/7.1.1/userguide/command_line_interface.html#sec:command_line_warnings
[2021-07-21T22:23:24.883Z] 
[2021-07-21T22:23:24.883Z] BUILD FAILED in 2h 9m 1s
[2021-07-21T22:23:24.883Z] 199 actionable tasks: 107 executed, 92 up-to-date
[2021-07-21T22:23:24.883Z] 
[2021-07-21T22:23:24.883Z] See the profiling report at: 
file:///home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/build/reports/profile/profile-2021-07-21-20-14-28.html
[2021-07-21T22:23:24.883Z] A fine-grained performance profile is available: use 
the --scan option.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch JDK 11 and Scala 2.12
[2021-07-21T22:23:27.577Z] > Task :connect:api:javadoc
[2021-07-21T22:23:27.577Z] > Task :connect:api:copyDependantLibs UP-TO-DATE
[2021-07-21T22:23:27.577Z] > Task :connect:api:jar UP-TO-DATE
[2021-07-21T22:23:27.577Z] > Task 
:connect:api:generateMetadataFileForMavenJavaPublication
[2021-07-21T22:23:27.577Z] > Task :connect:json:copyDependantLibs UP-TO-DATE
[2021-07-21T22:23:27.577Z] > Task :connect:json:jar UP-TO-DATE
[2021-07-21T22:23:27.577Z] > Task 
:connect:json:generateMetadataFileForMavenJavaPublication
[2021-07-21T22:23:27.577Z] > Task :connect:api:javadocJar
[2021-07-21T22:23:27.577Z] > Task 
:connect:json:publishMavenJavaPublicationToMavenLocal
[2021-07-21T22:23:27.577Z] > Task :connect:json:publishToMavenLocal
[2021-07-21T22:23:27.577Z] > Task :connect:api:compileTestJava UP-TO-DATE
[2021-07-21T22:23:27.577Z] > Task :connect:api:testClasses UP-TO-DATE
[2021-07-21T22:23:27.577Z] > Task :connect:api:testJar
[2021-07-21T22:23:27.577Z] > Task :connect:api:testSrcJar
[2021-07-21T22:23:27.577Z] > Task 
:connect:api:publishMavenJavaPublicationToMavenLocal
[2021-07-21T22:23:27.577Z] > Task :connect:api:publishToMavenLocal
[2021-07-21T22:23:31.206Z] > Task :streams:javadoc
[2021-07-21T22:23:31.206Z] > Task :streams:javadocJar
[2021-07-21T22:23:32.139Z] > Task :streams:compileTestJava UP-TO-DATE
[2021-07-21T22:23:32.139Z] > Task :streams:testClasses UP-TO-DATE
[2021-07-21T22:23:32.139Z] > Task :streams:testJar
[2021-07-21T22:23:33.072Z] > Task :streams:testSrcJar
[2021-07-21T22:23:33.072Z] > Task 
:streams:publishMavenJavaPublicationToMavenLocal
[2021-07-21T22:23:33.072Z] > Task :streams:publishToMavenLocal
[2021-07-21T22:23:34.005Z] > Task :clients:javadoc
[2021-07-21T22:23:34.939Z] > Task :clients:javadocJar
[2021-07-21T22:23:35.871Z] 
[2021-07-21T22:23:35.871Z] > Task :clients:srcJar
[2021-07-21T22:23:35.871Z] Execution optimizations have been disabled for task 
':clients:srcJar' to ensure correctness due to the following reasons:
[2021-07-21T22:23:35.871Z]   - Gradle detected a problem with the following 
location: 
'/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/clients/src/generated/java'.
 Reason: Task ':clients:srcJar' uses this output of task 
':clients:processMessages' without declaring an explicit or implicit 
dependency. This can lead to incorrect results being produced, depending on 
what order the tasks are executed. Please refer to 
https://docs.gradle.org/7.1.1/userguide/validation_problems.html#implicit_dependency
 for more details about this problem.
[2021-07-21T22:23:36.804Z] 
[2021-07-21T22:23:36.804Z] > Task :clients:testJar
[2021-07-21T22:23:37.802Z] > Task :clients:testSrcJar
[2021-07-21T22:23:37.802Z] > Task 
:clients:publishMavenJavaPublicationToMavenLocal
[2021-07-21T22:23:37.802Z] > Task :clients:publishToMavenLocal
[2021-07-21T22:23:37.802Z] 
[2021-07-21T22:23:37.802Z] Deprecated Gradle features were used in this build, 
making it incompatible with Gradle 8.0.
[2021-07-21T22:23:37.802Z] 
[2021-07-21T22:23:37.802Z] You can use '--warning-mode all' to show the 
individual deprecation warnings and determine if they come from your own 
scripts or plugins.
[2021-07-21T22:23:37.802Z] 
[2021-07-21T22:23:37.802Z] See 
https://docs.gradle.org/7.1.1/userguide/command_line_interface.html#sec:command_line_warnings
[2021-07-21T22:23:37.802Z] 
[2021-07-21T22:23:37.802Z] Execution optimizations have been disabled for 3 
invalid unit(s) of work during this build to ensure correctness.
[2021-07-21T22:23:37.802Z] Please consult deprecation warnings for more details.
[2021-07-21T22:23:37.802Z] 
[2021-07-21T22:23:37.802Z] BUILD SUCCESSFUL in 38s
[2021-07-21T22:23:37.802Z] 77 actionable tasks: 34 executed, 43 up-to-date
[Pipeline] sh
[2021-07-21T22:23:40.447Z] + grep ^version= gradle.properties
[2021-07-21T22:23:40.447Z] + cut -d= -f 2
[Pipeline] dir
[2021-07-21T22:23:41.306Z] Running in 

Re: [DISCUSS] Apache Kafka 3.0.0 release plan with new updated dates

2021-07-21 Thread Konstantine Karantasis
Thanks for the heads up Colin.

KAFKA-13112 seems important and of course relevant to what we ship with
3.0.
Same for the test failures captured by KAFKA-13095 and KAFKA-12851. Fixing
those will increase the stability of our builds.

Therefore, considering these tickets as blockers currently makes sense to
me.

Konstantine


On Wed, Jul 21, 2021 at 11:46 AM Colin McCabe  wrote:

> Hi Konstantine,
>
> Thanks for your work on this release! We discovered three blocker bugs
> which are worth bringing up here:
>
> 1. KAFKA-13112: Controller's committed offset get out of sync with raft
> client listener context
> 2. KAFKA-13095: TransactionsTest is failing in kraft mode
> 3. KAFKA-12851: Flaky Test
> RaftEventSimulationTest.canMakeProgressIfMajorityIsReachable
>
> There are two subtasks for #1 which we are working on. We suspect that #3
> has been fixed by a previous fix we made... we're looking into it.
>
> best,
> Colin
>
> On Mon, Jul 19, 2021, at 20:23, Konstantine Karantasis wrote:
> > Hi all,
> >
> > Since last week, we have reached the stage of Code Freeze for the 3.0.0
> > Apache Kafka release.
> >
> > From this point forward and until the official release of 3.0.0, only
> > critical fixes for blocker issues should be merged to the 3.0 release
> > branch.
> >
> > The release plan currently includes ten (10) such known blockers.
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+3.0.0
> >
> > Besides these issues, any new issue that potentially gets discovered will
> > need to be reported on dev@kafka.apache.org (under this thread) and be
> > evaluated as a release blocker. At this point, the bar for such issues is
> > high; they need to be regressions or critical issues without an
> acceptable
> > workaround to be considered as release blockers.
> >
> > Exceptions of changes that may be merged to the 3.0 branch without a
> > mention on the dev mailing list are fixes for test failures that will
> help
> > stabilize the build and small documentation changes.
> >
> > If by the end of this week we are down to zero blockers and have green
> > builds and passing system tests, I will attempt to generate the first
> > Release Candidate (RC) for 3.0.0 on Friday.
> >
> > Thank you all for the hard work so far.
> > Konstantine
> >
> >
> > On Mon, Jul 12, 2021 at 1:59 PM Konstantine Karantasis
> >  wrote:
> >
> > > Hi all,
> > >
> > > This is a reminder that Code Freeze for Apache Kafka 3.0 is coming up
> this
> > > week and is set to take place by the end of day Wednesday, July 14th.
> > >
> > > Currently in the project we have 22 blocker issues for 3.0, out of 41
> total
> > > tickets targeting 3.0.
> > >
> > > You may find the list of open issues in the release plan page:
> > > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+3.0.0
> > >
> > > Thanks for all the hard work so far and for reducing the number of open
> > > issues in the recent days.
> > > Please take another look and help us resolve all the blockers for this
> > > upcoming major release.
> > >
> > > Best,
> > > Konstantine
> > >
> > > On Mon, Jul 12, 2021 at 1:57 PM Konstantine Karantasis <
> > > konstant...@confluent.io> wrote:
> > >
> > > >
> > > > Thanks for the update Levani,
> > > >
> > > > KIP-708 is now on the list of postponed KIPs.
> > > >
> > > > Konstantine
> > > >
> > > > On Thu, Jul 1, 2021 at 10:48 PM Levani Kokhreidze <
> > > levani.co...@gmail.com>
> > > > wrote:
> > > >
> > > >> Hi Konstantine,
> > > >>
> > > >> FYI, I don’t think we will be able to have KIP-708 ready on time.
> > > >> Feel free to remove it from the release plan.
> > > >>
> > > >> Best,
> > > >> Levani
> > > >>
> > > >> > On 1. Jul 2021, at 01:27, Konstantine Karantasis <
> > > >> konstant...@confluent.io.INVALID> wrote:
> > > >> >
> > > >> > Hi all,
> > > >> >
> > > >> > Today we have reached the Feature Freeze milestone for Apache
> Kafka
> > > 3.0.
> > > >> > Exciting!
> > > >> >
> > > >> > I'm going to allow for any pending changes to settle within the
> next
> > > >> couple
> > > >> > of days.
> > > >> > I trust that we all approve and merge adopted features and changes
> > > >> which we
> > > >> > consider to be in good shape for 3.0.
> > > >> >
> > > >> > Given the 4th of July holiday in the US, the 3.0 release branch
> will
> > > >> appear
> > > >> > sometime on Tuesday, July 6th.
> > > >> > Until then, please keep merging to trunk only the changes you
> intend
> > > to
> > > >> > include in Apache Kafka 3.0.
> > > >> >
> > > >> > Regards,
> > > >> > Konstantine
> > > >> >
> > > >> >
> > > >> > On Wed, Jun 30, 2021 at 3:25 PM Konstantine Karantasis <
> > > >> > konstant...@confluent.io> wrote:
> > > >> >
> > > >> >>
> > > >> >> Done. Thanks Luke!
> > > >> >>
> > > >> >> On Tue, Jun 29, 2021 at 6:39 PM Luke Chen 
> wrote:
> > > >> >>
> > > >> >>> Hi Konstantine,
> > > >> >>> We've decided that the KIP-726 will be released in V3.1, not
> V3.0.
> > > >> >>> KIP-726: Make the "cooperative-sticky, range" as the default
> > > 

[jira] [Created] (KAFKA-13121) Flaky Test TopicBasedRemoteLogMetadataManagerTest.testNewPartitionUpdates()

2021-07-21 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-13121:
--

 Summary: Flaky Test 
TopicBasedRemoteLogMetadataManagerTest.testNewPartitionUpdates()
 Key: KAFKA-13121
 URL: https://issues.apache.org/jira/browse/KAFKA-13121
 Project: Kafka
  Issue Type: Bug
  Components: log
Reporter: A. Sophie Blee-Goldman


h4. Stack Trace
{code:java}
org.apache.kafka.server.log.remote.storage.RemoteResourceNotFoundException: No 
resource found for partition: TopicIdPartition{topicId=2B9rDu44TE6c8pLG8A0RAg, 
topicPartition=new-leader-0}
at 
org.apache.kafka.server.log.remote.metadata.storage.RemotePartitionMetadataStore.getRemoteLogMetadataCache(RemotePartitionMetadataStore.java:112)
 
at 
org.apache.kafka.server.log.remote.metadata.storage.RemotePartitionMetadataStore.listRemoteLogSegments(RemotePartitionMetadataStore.java:98)
at 
org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager.listRemoteLogSegments(TopicBasedRemoteLogMetadataManager.java:212)
at 
org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManagerTest.testNewPartitionUpdates(TopicBasedRemoteLogMetadataManagerTest.java:99){code}
https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-10921/11/testReport/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to stable : Kafka » Kafka Branch Builder » 3.0 #54

2021-07-21 Thread Apache Jenkins Server
See 




[VOTE] KIP-763: Range queries with open endpoints

2021-07-21 Thread Patrick Stuedi
Hi all,

Thanks for the feedback on the KIP, I have updated the KIP and would like
to start the voting.

The KIP can be found here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-763%3A+Range+queries+with+open+endpoints

Please vote in this thread.

Thanks!
-Patrick


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #349

2021-07-21 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » 2.8 #52

2021-07-21 Thread Apache Jenkins Server
See 




Re: [DISCUSS] Apache Kafka 3.0.0 release plan with new updated dates

2021-07-21 Thread Colin McCabe
Hi Konstantine,

Thanks for your work on this release! We discovered three blocker bugs which 
are worth bringing up here:

1. KAFKA-13112: Controller's committed offset get out of sync with raft client 
listener context
2. KAFKA-13095: TransactionsTest is failing in kraft mode
3. KAFKA-12851: Flaky Test 
RaftEventSimulationTest.canMakeProgressIfMajorityIsReachable

There are two subtasks for #1 which we are working on. We suspect that #3 has 
been fixed by a previous fix we made... we're looking into it.

best,
Colin

On Mon, Jul 19, 2021, at 20:23, Konstantine Karantasis wrote:
> Hi all,
> 
> Since last week, we have reached the stage of Code Freeze for the 3.0.0
> Apache Kafka release.
> 
> From this point forward and until the official release of 3.0.0, only
> critical fixes for blocker issues should be merged to the 3.0 release
> branch.
> 
> The release plan currently includes ten (10) such known blockers.
> 
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+3.0.0
> 
> Besides these issues, any new issue that potentially gets discovered will
> need to be reported on dev@kafka.apache.org (under this thread) and be
> evaluated as a release blocker. At this point, the bar for such issues is
> high; they need to be regressions or critical issues without an acceptable
> workaround to be considered as release blockers.
> 
> Exceptions of changes that may be merged to the 3.0 branch without a
> mention on the dev mailing list are fixes for test failures that will help
> stabilize the build and small documentation changes.
> 
> If by the end of this week we are down to zero blockers and have green
> builds and passing system tests, I will attempt to generate the first
> Release Candidate (RC) for 3.0.0 on Friday.
> 
> Thank you all for the hard work so far.
> Konstantine
> 
> 
> On Mon, Jul 12, 2021 at 1:59 PM Konstantine Karantasis
>  wrote:
> 
> > Hi all,
> >
> > This is a reminder that Code Freeze for Apache Kafka 3.0 is coming up this
> > week and is set to take place by the end of day Wednesday, July 14th.
> >
> > Currently in the project we have 22 blocker issues for 3.0, out of 41 total
> > tickets targeting 3.0.
> >
> > You may find the list of open issues in the release plan page:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+3.0.0
> >
> > Thanks for all the hard work so far and for reducing the number of open
> > issues in the recent days.
> > Please take another look and help us resolve all the blockers for this
> > upcoming major release.
> >
> > Best,
> > Konstantine
> >
> > On Mon, Jul 12, 2021 at 1:57 PM Konstantine Karantasis <
> > konstant...@confluent.io> wrote:
> >
> > >
> > > Thanks for the update Levani,
> > >
> > > KIP-708 is now on the list of postponed KIPs.
> > >
> > > Konstantine
> > >
> > > On Thu, Jul 1, 2021 at 10:48 PM Levani Kokhreidze <
> > levani.co...@gmail.com>
> > > wrote:
> > >
> > >> Hi Konstantine,
> > >>
> > >> FYI, I don’t think we will be able to have KIP-708 ready on time.
> > >> Feel free to remove it from the release plan.
> > >>
> > >> Best,
> > >> Levani
> > >>
> > >> > On 1. Jul 2021, at 01:27, Konstantine Karantasis <
> > >> konstant...@confluent.io.INVALID> wrote:
> > >> >
> > >> > Hi all,
> > >> >
> > >> > Today we have reached the Feature Freeze milestone for Apache Kafka
> > 3.0.
> > >> > Exciting!
> > >> >
> > >> > I'm going to allow for any pending changes to settle within the next
> > >> couple
> > >> > of days.
> > >> > I trust that we all approve and merge adopted features and changes
> > >> which we
> > >> > consider to be in good shape for 3.0.
> > >> >
> > >> > Given the 4th of July holiday in the US, the 3.0 release branch will
> > >> appear
> > >> > sometime on Tuesday, July 6th.
> > >> > Until then, please keep merging to trunk only the changes you intend
> > to
> > >> > include in Apache Kafka 3.0.
> > >> >
> > >> > Regards,
> > >> > Konstantine
> > >> >
> > >> >
> > >> > On Wed, Jun 30, 2021 at 3:25 PM Konstantine Karantasis <
> > >> > konstant...@confluent.io> wrote:
> > >> >
> > >> >>
> > >> >> Done. Thanks Luke!
> > >> >>
> > >> >> On Tue, Jun 29, 2021 at 6:39 PM Luke Chen  wrote:
> > >> >>
> > >> >>> Hi Konstantine,
> > >> >>> We've decided that the KIP-726 will be released in V3.1, not V3.0.
> > >> >>> KIP-726: Make the "cooperative-sticky, range" as the default
> > assignor
> > >> >>>
> > >> >>> Could you please remove this KIP from the 3.0 release plan wiki
> > page?
> > >> >>>
> > >> >>> Thank you.
> > >> >>> Luke
> > >> >>>
> > >> >>> On Wed, Jun 30, 2021 at 8:23 AM Konstantine Karantasis
> > >> >>>  wrote:
> > >> >>>
> > >>  Thanks for the update Colin.
> > >>  They are now both in the release plan.
> > >> 
> > >>  Best,
> > >>  Konstantine
> > >> 
> > >>  On Tue, Jun 29, 2021 at 2:55 PM Colin McCabe 
> > >> >>> wrote:
> > >> 
> > >> > Hi Konstantine,
> > >> >
> > >> > Can you please add two KIPs to the 3.0 release plan wiki page?
> > >> >
> > >> 

[jira] [Created] (KAFKA-13120) Flesh out `streams_static_membership_test` to be more robust

2021-07-21 Thread Leah Thomas (Jira)
Leah Thomas created KAFKA-13120:
---

 Summary: Flesh out `streams_static_membership_test` to be more 
robust
 Key: KAFKA-13120
 URL: https://issues.apache.org/jira/browse/KAFKA-13120
 Project: Kafka
  Issue Type: Task
  Components: streams, system tests
Reporter: Leah Thomas


When fixing the `streams_static_membership_test.py` we noticed that the test is 
pretty bare bones: it creates a streams application but doesn't ever send data 
through it or do much with the streams application. We should flesh this out a 
bit to be more realistic. The full Java test is in `StaticMembershipTestClient`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-13119) Validate the KRaft controllerListener config on startup

2021-07-21 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe resolved KAFKA-13119.
--
Resolution: Fixed

> Validate the KRaft controllerListener config on startup
> ---
>
> Key: KAFKA-13119
> URL: https://issues.apache.org/jira/browse/KAFKA-13119
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin McCabe
>Assignee: Niket Goel
>Priority: Blocker
> Fix For: 3.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13119) Validate the KRaft controllerListener config on startup

2021-07-21 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-13119:


 Summary: Validate the KRaft controllerListener config on startup
 Key: KAFKA-13119
 URL: https://issues.apache.org/jira/browse/KAFKA-13119
 Project: Kafka
  Issue Type: Bug
Reporter: Colin McCabe
Assignee: Niket Goel
 Fix For: 3.0.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Request permission for KIP creation

2021-07-21 Thread Haruki Okada
Hi, Kafka.

I would like to request permission for KIP creation under
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
to propose a config-addition related to
https://issues.apache.org/jira/browse/KAFKA-9648 .

JIRA id: ocadaruma
email: ocadar...@gmail.com


Thanks,

-- 

Okada Haruki
ocadar...@gmail.com



[jira] [Created] (KAFKA-13118) Backport KAFKA-9887 to 3.0 branch after 3.0.0 release

2021-07-21 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-13118:
-

 Summary: Backport KAFKA-9887 to 3.0 branch after 3.0.0 release
 Key: KAFKA-13118
 URL: https://issues.apache.org/jira/browse/KAFKA-13118
 Project: Kafka
  Issue Type: Task
  Components: KafkaConnect
Affects Versions: 3.0.1
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 3.0.1


We need to backport the fix (commit hash `0314801a8e`) for KAFKA-9887 to the 
`3.0` branch. That fix was merged to `trunk`, `2.8`, and `2.7` _after_ the 3.0 
code freeze, and that issue is not a blocker or regression.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-13104) Controller should notify the RaftClient when it resigns

2021-07-21 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe resolved KAFKA-13104.
--
Resolution: Fixed

Fixed

> Controller should notify the RaftClient when it resigns
> ---
>
> Key: KAFKA-13104
> URL: https://issues.apache.org/jira/browse/KAFKA-13104
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, kraft
>Reporter: Jose Armando Garcia Sancio
>Assignee: Ryan Dielhenn
>Priority: Blocker
>  Labels: kip-500
> Fix For: 3.0.0
>
>
> {code:java}
>   private Throwable handleEventException(String name,
>  Optional<Long> startProcessingTimeNs,
>  Throwable exception) {
>   ...
>   renounce();
>   return new UnknownServerException(exception);
>   }
>  {code}
> When the active controller encounters an event exception it attempts to 
> renounce leadership. Unfortunately, this doesn't tell the {{RaftClient}} that 
> it should attempt to give up leadership. This will result in inconsistent 
> state with the {{RaftClient}} as leader but with the controller as inactive.
> We should change this implementation so that the active controller asks the 
> {{RaftClient}} to resign. The active controller waits until 
> {{handleLeaderChange}} before calling {{renounce()}}
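
A rough, self-contained sketch of the ordering described above; every type and name
below is a stand-in, not the real QuorumController or RaftClient classes:

public class ResignFlowSketch {

    interface ToyRaftClient {
        void resign(int epoch);
    }

    static class ToyController {
        private final int nodeId;
        private boolean active = true;

        ToyController(int nodeId) {
            this.nodeId = nodeId;
        }

        // On an event exception, ask the Raft layer to give up leadership rather
        // than renouncing locally right away.
        void handleEventException(ToyRaftClient raft, int claimedEpoch, Throwable t) {
            raft.resign(claimedEpoch);
        }

        // Invoked by the Raft layer once leadership has actually changed; only now
        // does the controller renounce, so the two layers never disagree about who
        // is leading.
        void handleLeaderChange(int newLeaderId) {
            if (newLeaderId != nodeId && active) {
                active = false;
                System.out.println("controller " + nodeId + " renounced");
            }
        }
    }

    public static void main(String[] args) {
        ToyController controller = new ToyController(1);
        // A toy raft client that immediately elects node 2 when asked to resign.
        ToyRaftClient raft = epoch -> controller.handleLeaderChange(2);
        controller.handleEventException(raft, 42, new RuntimeException("boom"));
    }
}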



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #348

2021-07-21 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #167

2021-07-21 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] KAFKA-9887 fix failed task or connector count on startup 
failure (#8844)


--
[...truncated 3.45 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotRequireParameters[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotRequireParameters[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED


Jenkins build is unstable: Kafka » Kafka Branch Builder » 2.8 #51

2021-07-21 Thread Apache Jenkins Server
See 




subscription

2021-07-21 Thread mohamed amine belkhodja
Hi,

I'm interested in contributing to the Kafka project.

*Bien cordialement / Best regards*
Mohamed Amine Belkhodja,
Mobile: +(33) 06.26.61.95.48
email: belkhodja.am...@gmail.com


[jira] [Created] (KAFKA-13117) After processors, migrate TupleForwarder and CacheFlushListener

2021-07-21 Thread John Roesler (Jira)
John Roesler created KAFKA-13117:


 Summary: After processors, migrate TupleForwarder and 
CacheFlushListener
 Key: KAFKA-13117
 URL: https://issues.apache.org/jira/browse/KAFKA-13117
 Project: Kafka
  Issue Type: Sub-task
Reporter: John Roesler


Currently, both of these interfaces take plain values in combination with 
timestamps:

CacheFlushListener:
{code:java}
void apply(K key, V newValue, V oldValue, long timestamp)
{code}
TimestampedTupleForwarder:
{code:java}
void maybeForward(K key,
                  V newValue,
                  V oldValue,
                  long timestamp)
{code}
These are internally translated to the new PAPI, but once the processors are 
migrated, this translation will no longer be needed. We should update both of 
these APIs to just accept a single {{Record}}.
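For illustration, a minimal sketch of the post-migration shape (the exact generic
parameters are an assumption, not the final API):

{code:java}
// Sketch: both callbacks take a single Record whose value carries the old/new
// pair (Change) and whose timestamp lives in the Record itself.
interface CacheFlushListener<K, V> {
    void apply(Record<K, Change<V>> record);
}

interface TimestampedTupleForwarder<K, V> {
    void maybeForward(Record<K, Change<V>> record);
}
{code}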



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10546) KIP-478: Deprecate the old PAPI interfaces

2021-07-21 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-10546.
--
Resolution: Fixed

> KIP-478: Deprecate the old PAPI interfaces
> --
>
> Key: KAFKA-10546
> URL: https://issues.apache.org/jira/browse/KAFKA-10546
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
>
> Can't be done until after the DSL internals are migrated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #347

2021-07-21 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-13116) Adjust system tests due to KAFKA-12944

2021-07-21 Thread Ismael Juma (Jira)
Ismael Juma created KAFKA-13116:
---

 Summary: Adjust system tests due to KAFKA-12944
 Key: KAFKA-13116
 URL: https://issues.apache.org/jira/browse/KAFKA-13116
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 3.0.0


Several system tests involving legacy message formats are failing due to 
KAFKA-12944:

http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2021-07-21--001.system-test-kafka-trunk--1626872410--confluentinc--master--038bdaa4df/report.html

All system tests that write data with legacy message formats need to use IBP 
2.8 or lower.
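For illustration only, the kind of broker override this implies (the actual change
lives in the system tests themselves; the keys shown are the standard broker configs):

{code:java}
// Sketch: pin brokers that still write the legacy message format to IBP <= 2.8.
import java.util.Properties;

Properties brokerProps = new Properties();
brokerProps.put("inter.broker.protocol.version", "2.8");
brokerProps.put("log.message.format.version", "2.8");
{code}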



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12418) Make sure it's ok not to include test jars in the release tarball

2021-07-21 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-12418.
-
Resolution: Done

> Make sure it's ok not to include test jars in the release tarball
> -
>
> Key: KAFKA-12418
> URL: https://issues.apache.org/jira/browse/KAFKA-12418
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Blocker
>
> As of [https://github.com/apache/kafka/pull/10203], the release tarball no 
> longer includes test, sources, javadoc, and test-sources jars. These are 
> still published to the Maven Central repository.
> This seems like a good change, and 3.0.0 would be a good time to do it, but 
> this JIRA is filed to follow up and confirm before that release.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.0 #53

2021-07-21 Thread Apache Jenkins Server
See 




permission request

2021-07-21 Thread mohamed amine belkhodja
Hi,

I'm interested in contributing to the Kafka project; here are my IDs:

wiki: ae84fb5e7a3a25da017ac90bfbce0240
jira user name: belkhodja.amine

*Bien cordialement / Best regards*
Mohamed Amine Belkhodja,
Mobile: +(33) 06.26.61.95.48
email: belkhodja.am...@gmail.com


[jira] [Created] (KAFKA-13115) doSend can be blocking

2021-07-21 Thread Ivan Vaskevych (Jira)
Ivan Vaskevych created KAFKA-13115:
--

 Summary: doSend can be blocking
 Key: KAFKA-13115
 URL: https://issues.apache.org/jira/browse/KAFKA-13115
 Project: Kafka
  Issue Type: Bug
Reporter: Ivan Vaskevych


https://github.com/apache/kafka/pull/11023



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-763: Range queries with open endpoints

2021-07-21 Thread Patrick Stuedi
Thanks John, Bruno for the valuable feedback!

John: I had a quick look at the SessionStore and WindowStore interfaces.
While it looks like we should be able to apply similar ideas to those
stores, the actual interfaces are slightly different, and expanding them for
open ranges may need a bit more thought. In that sense, and to make sure
this KIP converges, I'd prefer not to expand its scope. As Bruno suggested,
we can always propose changes to the SessionStore and WindowStore in a
separate KIP. I'll add a subsection to the rejected alternatives.

@Bruno: good point, I'll add an example. Yes, all() will be equivalent to
range(null, null) and the implementation of all() will be a call to
range(null, null).
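
For illustration, a minimal sketch of the proposed semantics (assuming the existing
ReadOnlyKeyValueStore API; null on either endpoint means "unbounded"):

{code:java}
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

// Sketch: count everything from fromKey (inclusive) to the end of the store.
// Passing null for both endpoints would be equivalent to store.all().
static long countFrom(final ReadOnlyKeyValueStore<String, Long> store,
                      final String fromKey) {
    long count = 0;
    try (final KeyValueIterator<String, Long> iter = store.range(fromKey, null)) {
        while (iter.hasNext()) {
            iter.next();
            count++;
        }
    }
    return count;
}
{code}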

-Patrick

On Wed, Jul 21, 2021 at 9:12 AM Bruno Cadonna  wrote:

> Thanks for the KIP, Patrick!
>
> I agree with John that your KIP is well motivated.
>
> I have just one minor feedback. Could you add store.range(null, null) to
> the example snippets with the comment that this is equivalent to all()
> (it is equivalent, right?)? This question about equivalence between
> range(null, null) and all() came up in my mind when I read the KIP and I
> think I am not the only one.
>
> Regarding expanding the KIP to SessionStore and WindowStore, I think you
> should not expand the scope of the KIP. We can do the changes to the
> SessionStore and WindowStore in a separate KIP. But it is your call!
>
>
> Best,
> Bruno
>
> On 21.07.21 00:18, John Roesler wrote:
> > Thanks for the KIP, Patrick!
> >
> > I think your KIP is well motivated. It's definitely a bummer
> > to have to iterate over the full store as a workaround for
> > open-ended ranges.
> >
> > I agree with your pragmatic approach here. We have recently
> > had a couple of other contributions to the store APIs
> > (prefix and reverseRange), and the complexity of adding
> > those new methods far exceeded what anyone expected. I'd be
> > in favor of being conservative with new store APIs until we
> > are able to rein in that complexity somehow. Since your
> > proposed semantics seem reasonable, I'm in favor.
> >
> > While reviewing your KIP, it struck me that we have several
> > range-like APIs on:
> > * SessionStore
> > (
> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/ReadOnlySessionStore.java
> )
> > * WindowStore
> > (
> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/ReadOnlyWindowStore.java
> )
> > as well.
> >
> > Do you think it would be a good idea to also propose a
> > similar change on those APIs? It might be nice (for exactly
> > the same reasons you documented), but it also increases the
> > scope of your work. There is one extra wrinkle I can see:
> > SessionStore has versions of the range methods that take a
> > `long` instead of an `Instant`, so `null` wouldn't work for
> > them.
> >
> > If you prefer to keep your KIP narrow in scope to just the
> > KeyValueStore, I'd also support you. In that case, it might
> > be a good idea to simply mention in the "Rejected
> > Alternatives" that you decided not to address those other
> > store APIs at this time. That way, people later on won't
> > have to wonder why those other methods didn't get updated.
> >
> > Other than that, I have no concerns!
> >
> > Thank you,
> > John
> >
> > On Mon, 2021-07-19 at 13:22 +0200, Patrick Stuedi wrote:
> >> Hi everyone,
> >>
> >> I would like to start the discussion for KIP-763: Range queries with
> open
> >> endpoints.
> >>
> >> The KIP can be found here:
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-763%3A+Range+queries+with+open+endpoints
> >>
> >> Any feedback will be highly appreciated.
> >>
> >> Many thanks,
> >>   Patrick
> >
> >
>


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #346

2021-07-21 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 486136 lines...]
[2021-07-21T09:10:27.032Z] 
org.apache.kafka.streams.integration.KStreamTransformIntegrationTest > 
shouldTransformValuesWithValueTransformerWithKey PASSED
[2021-07-21T09:10:27.032Z] 
[2021-07-21T09:10:27.032Z] 
org.apache.kafka.streams.integration.KStreamTransformIntegrationTest > 
shouldTransformValuesWithValueTransformerWithKeyWithConnectedStoreProvider 
STARTED
[2021-07-21T09:10:28.058Z] 
[2021-07-21T09:10:28.058Z] 
org.apache.kafka.streams.integration.KStreamTransformIntegrationTest > 
shouldTransformValuesWithValueTransformerWithKeyWithConnectedStoreProvider 
PASSED
[2021-07-21T09:10:28.058Z] 
[2021-07-21T09:10:28.058Z] 
org.apache.kafka.streams.integration.KStreamTransformIntegrationTest > 
shouldTransformWithConnectedStoreProvider STARTED
[2021-07-21T09:10:32.635Z] 
[2021-07-21T09:10:32.635Z] 
org.apache.kafka.streams.integration.KStreamTransformIntegrationTest > 
shouldTransformWithConnectedStoreProvider PASSED
[2021-07-21T09:10:32.635Z] 
[2021-07-21T09:10:32.635Z] 
org.apache.kafka.streams.integration.KStreamTransformIntegrationTest > 
shouldFlatTransformWithConnectedStoreProvider STARTED
[2021-07-21T09:10:35.872Z] 
[2021-07-21T09:10:35.872Z] 
org.apache.kafka.streams.integration.KStreamTransformIntegrationTest > 
shouldFlatTransformWithConnectedStoreProvider PASSED
[2021-07-21T09:10:35.872Z] 
[2021-07-21T09:10:35.872Z] 
org.apache.kafka.streams.integration.KStreamTransformIntegrationTest > 
shouldFlatTransformValuesWithValueTransformerWithoutKeyWithConnectedStoreProvider
 STARTED
[2021-07-21T09:10:40.918Z] 
[2021-07-21T09:10:40.918Z] 
org.apache.kafka.streams.integration.KStreamTransformIntegrationTest > 
shouldFlatTransformValuesWithValueTransformerWithoutKeyWithConnectedStoreProvider
 PASSED
[2021-07-21T09:10:40.918Z] 
[2021-07-21T09:10:40.918Z] 
org.apache.kafka.streams.integration.KStreamTransformIntegrationTest > 
shouldTransformValuesWithValueTransformerWithoutKeyWithConnectedStoreProvider 
STARTED
[2021-07-21T09:10:45.361Z] 
[2021-07-21T09:10:45.361Z] 
org.apache.kafka.streams.integration.KStreamTransformIntegrationTest > 
shouldTransformValuesWithValueTransformerWithoutKeyWithConnectedStoreProvider 
PASSED
[2021-07-21T09:10:45.361Z] 
[2021-07-21T09:10:45.362Z] 
org.apache.kafka.streams.integration.KStreamTransformIntegrationTest > 
shouldTransformValuesWithValueTransformerWithoutKey STARTED
[2021-07-21T09:10:49.579Z] 
[2021-07-21T09:10:49.579Z] 
org.apache.kafka.streams.integration.KStreamTransformIntegrationTest > 
shouldTransformValuesWithValueTransformerWithoutKey PASSED
[2021-07-21T09:10:50.781Z] 
[2021-07-21T09:10:50.781Z] 
org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[at_least_once] STARTED
[2021-07-21T09:12:02.687Z] 
[2021-07-21T09:12:02.687Z] 
org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[at_least_once] PASSED
[2021-07-21T09:12:02.687Z] 
[2021-07-21T09:12:02.687Z] 
org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[exactly_once] STARTED
[2021-07-21T09:13:04.238Z] 
[2021-07-21T09:13:04.238Z] 
org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[exactly_once] PASSED
[2021-07-21T09:13:04.238Z] 
[2021-07-21T09:13:04.238Z] 
org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[exactly_once_v2] STARTED
[2021-07-21T09:14:32.245Z] Cannot contact H48: java.lang.InterruptedException
[2021-07-21T09:14:38.535Z] 
[2021-07-21T09:14:38.535Z] 
org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[exactly_once_v2] PASSED
[2021-07-21T09:14:38.535Z] 
[2021-07-21T09:14:38.535Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = true] STARTED
[2021-07-21T09:14:52.543Z] 
[2021-07-21T09:14:52.543Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = true] PASSED
[2021-07-21T09:14:52.543Z] 
[2021-07-21T09:14:52.543Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = true] STARTED
[2021-07-21T09:15:04.370Z] 
[2021-07-21T09:15:04.370Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = true] PASSED
[2021-07-21T09:15:04.370Z] 
[2021-07-21T09:15:04.370Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = true] STARTED
[2021-07-21T09:15:16.353Z] 
[2021-07-21T09:15:16.353Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = true] PASSED
[2021-07-21T09:15:16.353Z] 

Jenkins build became unstable: Kafka » Kafka Branch Builder » 3.0 #52

2021-07-21 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-763: Range queries with open endpoints

2021-07-21 Thread Bruno Cadonna

Thanks for the KIP, Patrick!

I agree with John that your KIP is well motivated.

I have just one minor feedback. Could you add store.range(null, null) to 
the example snippets with the comment that this is equivalent to all() 
(it is equivalent, right?)? This question about equivalence between 
range(null, null) and all() came up in my mind when I read the KIP and I 
think I am not the only one.


Regarding expanding the KIP to SessionStore and WindowStore, I think you 
should not expand the scope of the KIP. We can do the changes to the 
SessionStore and WindowStore in a separate KIP. But it is your call!



Best,
Bruno

On 21.07.21 00:18, John Roesler wrote:

Thanks for the KIP, Patrick!

I think your KIP is well motivated. It's definitely a bummer
to have to iterate over the full store as a workaround for
open-ended ranges.

I agree with your pragmatic approach here. We have recently
had a couple of other contributions to the store APIs
(prefix and reverseRange), and the complexity of adding
those new methods far exceeded what anyone expected. I'd be
in favor of being conservative with new store APIs until we
are able to rein in that complexity somehow. Since your
proposed semantics seem reasonable, I'm in favor.

While reviewing your KIP, it struck me that we have several
range-like APIs on:
* SessionStore
(https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/ReadOnlySessionStore.java)
* WindowStore
(https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/ReadOnlyWindowStore.java)
as well.

Do you think it would be a good idea to also propose a
similar change on those APIs? It might be nice (for exactly
the same reasons you documented), but it also increases the
scope of your work. There is one extra wrinkle I can see:
SessionStore has versions of the range methods that take a
`long` instead of an `Instant`, so `null` wouldn't work for
them.

If you prefer to keep your KIP narrow in scope to just the
KeyValueStore, I'd also support you. In that case, it might
be a good idea to simply mention in the "Rejected
Alternatives" that you decided not to address those other
store APIs at this time. That way, people later on won't
have to wonder why those other methods didn't get updated.

Other than that, I have no concerns!

Thank you,
John

On Mon, 2021-07-19 at 13:22 +0200, Patrick Stuedi wrote:

Hi everyone,

I would like to start the discussion for KIP-763: Range queries with open
endpoints.

The KIP can be found here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-763%3A+Range+queries+with+open+endpoints

Any feedback will be highly appreciated.

Many thanks,
  Patrick





Re: [DISCUSS] KIP-761: Add total blocked time metric to streams

2021-07-21 Thread Rohan Desai
> I remember now that we moved the round-trip PID's txn completion logic into
> init-transaction and commit/abort-transaction. So I think we'd count time
> spent in StreamsProducer#initTransaction as well (admittedly it is in most
> cases a one-time thing).

Makes sense - I'll update the KIP
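
For illustration, a rough sketch (not the KIP's implementation) of how a
StreamsProducer-style wrapper could accumulate these totals; the field names and
the surrounding producer/time members are assumptions:

{code:java}
// Sketch: measure the wall-clock time spent in the blocking transactional calls
// and accumulate it into running totals that feed blocked-time-total.
// txnInitTimeMsTotal / txnCommitTimeMsTotal and the producer/time fields are
// illustrative assumptions.
private double txnInitTimeMsTotal = 0.0;
private double txnCommitTimeMsTotal = 0.0;

void initTransaction() {
    final long startNs = time.nanoseconds();
    producer.initTransactions();
    txnInitTimeMsTotal += (time.nanoseconds() - startNs) / 1_000_000.0;
}

void commitTransaction(final Map<TopicPartition, OffsetAndMetadata> offsets,
                       final ConsumerGroupMetadata groupMetadata) {
    final long startNs = time.nanoseconds();
    producer.sendOffsetsToTransaction(offsets, groupMetadata); // part of the same blocking commit path
    producer.commitTransaction();
    txnCommitTimeMsTotal += (time.nanoseconds() - startNs) / 1_000_000.0;
}
{code}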

On Tue, Jul 20, 2021 at 11:48 PM Rohan Desai 
wrote:

>
> > I had a question - it seems like from the descriptions of
> `txn-commit-time-total` and `offset-commit-time-total` that they measure
> similar processes for ALOS and EOS, but only `txn-commit-time-total` is
> included in `blocked-time-total`. Why isn't `offset-commit-time-total` also
> included?
>
> I've updated the KIP to include it.
>
> > Aside from `flush-time-total`, `txn-commit-time-total` and
> `offset-commit-time-total`, which will be producer/consumer client metrics,
> the rest of the metrics will be streams metrics that will be thread level,
> is that right?
>
> Based on the feedback from Guozhang, I've updated the KIP to reflect that
> the lower-level metrics are all client metrics that are then summed to
> compute the blocked time metric, which is a Streams metric.
>
> On Tue, Jul 20, 2021 at 11:58 AM Rohan Desai 
> wrote:
>
>> > Similarly, I think "txn-commit-time-total" and
>> "offset-commit-time-total" may better be inside producer and consumer
>> clients respectively.
>>
>> I agree for offset-commit-time-total. For txn-commit-time-total I'm
>> proposing we measure `StreamsProducer.commitTransaction`, which wraps
>> multiple producer calls (sendOffsets, commitTransaction)
>>
>> > > For "txn-commit-time-total" specifically, besides producer.commitTxn.
>> other txn-related calls may also be blocking, including
>> producer.beginTxn/abortTxn, I saw you mentioned "txn-begin-time-total"
>> later in the doc, but did not include it as a separate metric, and
>> similarly, should we have a `txn-abort-time-total` as well? If yes, could
>> you update the KIP page accordingly.
>>
>> `beginTransaction` is not blocking - I meant to remove that from that
>> doc. I'll add something for abort.
>>
>> On Mon, Jul 19, 2021 at 11:55 PM Rohan Desai 
>> wrote:
>>
>>> Thanks for the review Guozhang! responding to your feedback inline:
>>>
>>> > 1) I agree that the current ratio metric is just a "point-in-time
>>> snapshot", and
>>> more flexible metrics that would allow reporters to calculate based on
>>> window intervals are better. However, the current mechanism of the
>>> proposed
>>> metrics assumes the thread->clients mapping as of today, where each
>>> thread
>>> would own exclusively one main consumer, restore consumer, producer and
>>> an
>>> admin client. But this mapping may be subject to change in the future.
>>> Have
>>> you thought about how this metric can be extended when, e.g. the embedded
>>> clients and stream threads are de-coupled?
>>>
>>> Of course this depends on how exactly we refactor the runtime - assuming
>>> that we plan to factor out consumers into an "I/O" layer that is
>>> responsible for receiving records and enqueuing them to be processed by
>>> processing threads, then I think it should be reasonable to count the time
>>> we spend blocked on this internal queue(s) as blocked. The main concern
>>> there to me is that the I/O layer would be doing something expensive like
>>> decompression that shouldn't be counted as "blocked". But if that really is
>>> so expensive that it starts to throw off our ratios then it's probably
>>> indicative of a larger problem that the "i/o layer" is a bottleneck and it
>>> would be worth refactoring so that decompression (or insert other expensive
>>> thing here) can also be done on the processing threads.
>>>
>>> > 2) [This and all below are minor comments] The "flush-time-total" may
>>> better be a producer client metric, as "flush-wait-time-total", than a
>>> streams metric, though the streams-level "total-blocked" can still
>>> leverage
>>> it. Similarly, I think "txn-commit-time-total" and
>>> "offset-commit-time-total" may better be inside producer and consumer
>>> clients respectively.
>>>
>>> Good call - I'll update the KIP
>>>
>>> > 3) The doc was not very clear on how "thread-start-time" would be
>>> needed
>>> when calculating streams utilization along with total-blocked time, could
>>> you elaborate a bit more in the KIP?
>>>
>>> Yes, will do.
>>>
>>> > For "txn-commit-time-total" specifically, besides producer.commitTxn.
>>> other txn-related calls may also be blocking, including
>>> producer.beginTxn/abortTxn, I saw you mentioned "txn-begin-time-total"
>>> later in the doc, but did not include it as a separate metric, and
>>> similarly, should we have a `txn-abort-time-total` as well? If yes,
>>> could
>>> you update the KIP page accordingly.
>>>
>>> Ack.
>>>
>>> On Mon, Jul 12, 2021 at 11:29 PM Rohan Desai 
>>> wrote:
>>>
 Hello All,

 I'd like to start a discussion on the KIP linked above which proposes
 some metrics that we would find useful to help measure whether a Kafka
 Streams 

Re: [DISCUSS] KIP-761: Add total blocked time metric to streams

2021-07-21 Thread Rohan Desai
> I had a question - it seems like from the descriptions of
`txn-commit-time-total` and `offset-commit-time-total` that they measure
similar processes for ALOS and EOS, but only `txn-commit-time-total` is
included in `blocked-time-total`. Why isn't `offset-commit-time-total` also
included?

I've updated the KIP to include it.

> Aside from `flush-time-total`, `txn-commit-time-total` and
`offset-commit-time-total`, which will be producer/consumer client metrics,
the rest of the metrics will be streams metrics that will be thread level,
is that right?

Based on the feedback from Guozhang, I've updated the KIP to reflect that
the lower-level metrics are all client metrics that are then summed to
compute the blocked time metric, which is a Streams metric.

On Tue, Jul 20, 2021 at 11:58 AM Rohan Desai 
wrote:

> > Similarly, I think "txn-commit-time-total" and
> "offset-commit-time-total" may better be inside producer and consumer
> clients respectively.
>
> I agree for offset-commit-time-total. For txn-commit-time-total I'm
> proposing we measure `StreamsProducer.commitTransaction`, which wraps
> multiple producer calls (sendOffsets, commitTransaction)
>
> > > For "txn-commit-time-total" specifically, besides producer.commitTxn.
> other txn-related calls may also be blocking, including
> producer.beginTxn/abortTxn, I saw you mentioned "txn-begin-time-total"
> later in the doc, but did not include it as a separate metric, and
> similarly, should we have a `txn-abort-time-total` as well? If yes, could
> you update the KIP page accordingly.
>
> `beginTransaction` is not blocking - I meant to remove that from that doc.
> I'll add something for abort.
>
> On Mon, Jul 19, 2021 at 11:55 PM Rohan Desai 
> wrote:
>
>> Thanks for the review Guozhang! responding to your feedback inline:
>>
>> > 1) I agree that the current ratio metric is just a "point-in-time snapshot",
>> and
>> more flexible metrics that would allow reporters to calculate based on
>> window intervals are better. However, the current mechanism of the
>> proposed
>> metrics assumes the thread->clients mapping as of today, where each thread
>> would own exclusively one main consumer, restore consumer, producer and an
>> admin client. But this mapping may be subject to change in the future.
>> Have
>> you thought about how this metric can be extended when, e.g. the embedded
>> clients and stream threads are de-coupled?
>>
>> Of course this depends on how exactly we refactor the runtime - assuming
>> that we plan to factor out consumers into an "I/O" layer that is
>> responsible for receiving records and enqueuing them to be processed by
>> processing threads, then I think it should be reasonable to count the time
>> we spend blocked on this internal queue(s) as blocked. The main concern
>> there to me is that the I/O layer would be doing something expensive like
>> decompression that shouldn't be counted as "blocked". But if that really is
>> so expensive that it starts to throw off our ratios then it's probably
>> indicative of a larger problem that the "i/o layer" is a bottleneck and it
>> would be worth refactoring so that decompression (or insert other expensive
>> thing here) can also be done on the processing threads.
>>
>> > 2) [This and all below are minor comments] The "flush-time-total" may
>> better be a producer client metric, as "flush-wait-time-total", than a
>> streams metric, though the streams-level "total-blocked" can still
>> leverage
>> it. Similarly, I think "txn-commit-time-total" and
>> "offset-commit-time-total" may better be inside producer and consumer
>> clients respectively.
>>
>> Good call - I'll update the KIP
>>
>> > 3) The doc was not very clear on how "thread-start-time" would be
>> needed
>> when calculating streams utilization along with total-blocked time, could
>> you elaborate a bit more in the KIP?
>>
>> Yes, will do.
>>
>> > For "txn-commit-time-total" specifically, besides producer.commitTxn.
>> other txn-related calls may also be blocking, including
>> producer.beginTxn/abortTxn, I saw you mentioned "txn-begin-time-total"
>> later in the doc, but did not include it as a separate metric, and
>> similarly, should we have a `txn-abort-time-total` as well? If yes, could
>> you update the KIP page accordingly.
>>
>> Ack.
>>
>> On Mon, Jul 12, 2021 at 11:29 PM Rohan Desai 
>> wrote:
>>
>>> Hello All,
>>>
>>> I'd like to start a discussion on the KIP linked above which proposes
>>> some metrics that we would find useful to help measure whether a Kafka
>>> Streams application is saturated. The motivation section in the KIP goes
>>> into some more detail on why we think this is a useful addition to the
>>> metrics already implemented. Thanks in advance for your feedback!
>>>
>>> Best Regards,
>>>
>>> Rohan
>>>
>>> On Mon, Jul 12, 2021 at 12:00 PM Rohan Desai 
>>> wrote:
>>>

 https://cwiki.apache.org/confluence/display/KAFKA/KIP-761%3A+Add+Total+Blocked+Time+Metric+to+Streams

>>>