[GitHub] [kafka-site] miguno commented on pull request #334: KAFKA-12393: Document multi-tenancy considerations

2021-03-01 Thread GitBox


miguno commented on pull request #334:
URL: https://github.com/apache/kafka-site/pull/334#issuecomment-788696584


   @bbejeck wrote in 
https://github.com/apache/kafka-site/pull/334#pullrequestreview-600990037:
   > Also, @miguno, can you create an identical PR to go against docs in AK 
trunk?
   
   Yes, I will do this once the content review of this PR is completed.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (KAFKA-12399) Add log4j2 Appender

2021-03-01 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-12399:
---

 Summary: Add log4j2 Appender
 Key: KAFKA-12399
 URL: https://issues.apache.org/jira/browse/KAFKA-12399
 Project: Kafka
  Issue Type: Improvement
  Components: logging
Reporter: Dongjin Lee
Assignee: Dongjin Lee


As a follow-up to KAFKA-9366, we have to provide a log4j2 counterpart to the
log4j-appender module.
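
For illustration, a minimal sketch of what such a log4j2 appender could look
like. The log4j2 plugin API used here (@Plugin, @PluginFactory,
AbstractAppender) is standard, but the class name, plugin name, and
configuration attributes are hypothetical and not the eventual Kafka
implementation:

import java.io.Serializable;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.logging.log4j.core.Filter;
import org.apache.logging.log4j.core.Layout;
import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.appender.AbstractAppender;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.config.plugins.PluginAttribute;
import org.apache.logging.log4j.core.config.plugins.PluginElement;
import org.apache.logging.log4j.core.config.plugins.PluginFactory;

@Plugin(name = "Kafka", category = "Core", elementType = "appender")
public final class KafkaLog4j2Appender extends AbstractAppender {

    private final KafkaProducer<byte[], byte[]> producer;
    private final String topic;

    private KafkaLog4j2Appender(String name, Filter filter,
                                Layout<? extends Serializable> layout,
                                String topic, String bootstrapServers) {
        super(name, filter, layout, true, null);
        this.topic = topic;
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.ByteArraySerializer");
        this.producer = new KafkaProducer<>(props);
    }

    @PluginFactory
    public static KafkaLog4j2Appender createAppender(
            @PluginAttribute("name") String name,
            @PluginAttribute("topic") String topic,
            @PluginAttribute("bootstrapServers") String bootstrapServers,
            @PluginElement("Layout") Layout<? extends Serializable> layout,
            @PluginElement("Filter") Filter filter) {
        return new KafkaLog4j2Appender(name, filter, layout, topic, bootstrapServers);
    }

    @Override
    public void append(LogEvent event) {
        // Render the event with the configured layout (or fall back to the
        // formatted message) and hand it to the producer asynchronously.
        byte[] value = getLayout() != null
                ? getLayout().toByteArray(event)
                : event.getMessage().getFormattedMessage().getBytes(StandardCharsets.UTF_8);
        producer.send(new ProducerRecord<>(topic, value));
    }

    @Override
    public void stop() {
        producer.close();
        super.stop();
    }
}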



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12398) Fix flaky test `ConsumerBounceTest.testClose`

2021-03-01 Thread dengziming (Jira)
dengziming created KAFKA-12398:
--

 Summary: Fix flaky test `ConsumerBounceTest.testClose`
 Key: KAFKA-12398
 URL: https://issues.apache.org/jira/browse/KAFKA-12398
 Project: Kafka
  Issue Type: Improvement
Reporter: dengziming
Assignee: dengziming
 Attachments: image-2021-03-02-14-22-34-367.png

Sometimes it fails with the following error:

!image-2021-03-02-14-22-34-367.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #100

2021-03-01 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] KAFKA-10340: Proactively close producer when cancelling source 
tasks (#10016)


--
[...truncated 6.35 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutput

[jira] [Created] (KAFKA-12397) Error:KeeperErrorCode = BadVersion for /brokers/topics

2021-03-01 Thread Sachin (Jira)
Sachin created KAFKA-12397:
--

 Summary: Error:KeeperErrorCode = BadVersion for /brokers/topics
 Key: KAFKA-12397
 URL: https://issues.apache.org/jira/browse/KAFKA-12397
 Project: Kafka
  Issue Type: Bug
Reporter: Sachin


We are running ZooKeeper and Kafka on DC/OS. We are currently observing the 
errors below in the ZooKeeper logs:

 
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@892] - Processing mntr command from /100.77.120.5:47676
2021-03-01 06:48:28,223 [myid:2] - INFO [Thread-7313:NIOServerCnxn@1040] - Closed socket connection for client /100.77.120.5:47676 (no session established for client)
2021-03-01 06:48:31,745 [myid:2] - INFO [ProcessThread(sid:2 cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x208f3e3caa5 type:setData cxid:0x11772f zxid:0x571be5 txntype:-1 reqpath:n/a Error Path:/brokers/topics/connect-offsets/partitions/8/state Error:KeeperErrorCode = BadVersion for /brokers/topics/connect-offsets/partitions/8/state
2021-03-01 06:48:31,756 [myid:2] - INFO [ProcessThread(sid:2 cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x208f3e3caa5 type:setData cxid:0x117731 zxid:0x571be6 txntype:-1 reqpath:n/a Error Path:/brokers/topics/connect-offsets/partitions/20/state Error:KeeperErrorCode = BadVersion for /brokers/topics/connect-offsets/partitions/20/state 2
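
For context, these user-level BadVersion KeeperExceptions come from
ZooKeeper's optimistic concurrency control: a conditional setData() only
succeeds when the caller's expected znode version matches the current one.
A hedged sketch of that mechanism (illustrative only, not Kafka's internal
code; the class name is made up):

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class VersionedZkUpdate {
    public static void updateState(ZooKeeper zk, String path, byte[] newData)
            throws KeeperException, InterruptedException {
        Stat stat = new Stat();
        zk.getData(path, false, stat);  // read current data and version
        try {
            // Conditional write: fails if another writer updated the znode
            // after our read.
            zk.setData(path, newData, stat.getVersion());
        } catch (KeeperException.BadVersionException e) {
            // Another writer won the race; the caller re-reads and retries.
            // Such conflicts on partition-state znodes are retried by the
            // brokers, so INFO-level BadVersion entries are usually benign.
        }
    }
}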



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-715: Expose Committed offset in streams

2021-03-01 Thread Matthias J. Sax
> but the user should
> not rely on all tasks being returned at any given time to begin with since
> it's possible we are in between revoking and re-assigning a partition.

Exactly. That is what I meant: the "hand-off" phase of partitions during
a rebalance. During this phase, some tasks are "missing" if you
aggregate the information globally. My point (even if it might be
obvious to us) is that this seems worth pointing out in the KIP and in
the docs.

I meant "partial information" from a global POV (not partial for a
single local instance).
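
As a rough sketch of the global view in question (not part of the KIP): a
monitoring service could union the task IDs reported by every instance via
the existing KafkaStreams#localThreadsMetadata() accessor (the method the
thread refers to as localThreadMetadata) and flag tasks that are momentarily
missing. The list of instances and expectedTaskIds are assumed to come from
the monitoring side:

import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.processor.TaskMetadata;
import org.apache.kafka.streams.processor.ThreadMetadata;

public class TaskCoverageCheck {
    public static Set<String> missingTasks(List<KafkaStreams> instances,
                                           Set<String> expectedTaskIds) {
        Set<String> seen = new HashSet<>();
        for (KafkaStreams instance : instances) {
            for (ThreadMetadata thread : instance.localThreadsMetadata()) {
                for (TaskMetadata task : thread.activeTasks()) {
                    seen.add(task.taskId());
                }
                for (TaskMetadata task : thread.standbyTasks()) {
                    seen.add(task.taskId());
                }
            }
        }
        // A non-empty result during a hand-off is expected; a persistent
        // gap is not.
        Set<String> missing = new HashSet<>(expectedTaskIds);
        missing.removeAll(seen);
        return missing;
    }
}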

> Also, I mention that the threads return the highest value they have seen
> so far for any tasks assigned to them.

For the shutdown case maybe, but after a task is closed its metadata
should not be returned any longer IMHO.


-Matthias

On 3/1/21 4:46 PM, Walker Carlson wrote:
> I updated the KIP to use Optional; good idea, Matthias.
> 
> For localThreadMetadata, it could already be called during a rebalance.
> Also, I mention that the threads return the highest value they have seen
> so far for any tasks assigned to them. I thought it would be useful to see
> the TaskMetadata while the threads were shutting down. I don't think there
> should really be partial information. If you think this should be
> clarified further, let me know.
> 
> walker
> 
> On Mon, Mar 1, 2021 at 3:45 PM Sophie Blee-Goldman 
> wrote:
> 
>> Can you clarify your second question, Matthias? If this is queried during
>> a cooperative rebalance, it should return the tasks as usual. If the user
>> is using eager rebalancing then this will not return any tasks, but the
>> user should not rely on all tasks being returned at any given time to
>> begin with, since it's possible we are in between revoking and
>> re-assigning a partition.
>>
>> What does "partial information" mean?
>>
>> (btw I agree that an Optional makes sense for timeCurrentIdlingStarted())
>>
>> On Mon, Mar 1, 2021 at 11:46 AM Matthias J. Sax  wrote:
>>
>>> Thanks for updating the KIP, Walker.
>>>
>>> About `timeCurrentIdlingStarted()`: should we return an `Optional`
>>> instead of `-1` if a task is not idling?
>>>
>>>
>>> As we allow `localThreadMetadata()` to be called at any time, could it be
>>> that we report partial information during a rebalance? If yes, this
>>> should be pointed out, because if one wants to implement a health check,
>>> this needs to be taken into account.
>>>
>>> -Matthias
>>>
>>>
>>> On 2/27/21 11:32 AM, Walker Carlson wrote:
 Sure thing Boyang,

 1) it is in proposed changes. I expanded on it a bit more now.
 2) done
 3) and done :)

 thanks for the suggestions,
 walker

 On Fri, Feb 26, 2021 at 3:10 PM Boyang Chen <
>> reluctanthero...@gmail.com>
 wrote:

> Thanks Walker. Some minor comments:
>
> 1. Could you add a reference to the localThreadMetadata method in the KIP?
> 2. Could you make the code block a Java template, such that
> TaskMetadata.java could be the template title? Also, it would be good to
> add some meta comments about the newly added functions.
> 3. Could you write more details about rejected alternatives, such as why
> we chose not to expose this as metrics, and why a new method on KStream is
> not favorable? These would be valuable when we look back on our design
> decisions.
>
> On Fri, Feb 26, 2021 at 11:23 AM Walker Carlson <
>> wcarl...@confluent.io>
> wrote:
>
>> I understand now. I think that is a valid concern, but I think it is
>> best solved by having an external service verify through Streams. As this
>> KIP is now just adding fields to TaskMetadata to be returned in the
>> threadMetadata, I am going to say that is out of scope.
>>
>> That seems to be the last concern. If there are no others, I will put
>> this up for a vote soon.
>>
>> walker
>>
>> On Thu, Feb 25, 2021 at 12:35 PM Boyang Chen <
>>> reluctanthero...@gmail.com
>>
>> wrote:
>>
>>> For the 3rd point, yes, what I'm proposing is an edge case. For example,
>>> when we have 4 tasks [0_0, 0_1, 1_0, 1_1], a bug in the rebalancing
>>> logic causes no one to get 1_1 assigned. Then the health check service
>>> will only see 3 tasks [0_0, 0_1, 1_0] reporting progress normally, while
>>> paying no attention to 1_1. What I want to expose is a "logical global"
>>> view of all the tasks through the stream instance, since each instance
>>> gets the assigned topology and should be able to infer the exact set of
>>> tasks that must be up and running when the service is healthy.
>>>
>>> On Thu, Feb 25, 2021 at 11:25 AM Walker Carlson <
>>> wcarl...@confluent.io
>>
>>> wrote:
>>>
 Thanks for the follow-up, Boyang and Guozhang.

 I have updated the KIP to include these ideas.

 Guozhang, that is a good idea about using the TaskMe

Re: [DISCUSS] KIP-715: Expose Committed offset in streams

2021-03-01 Thread Walker Carlson
I updated the KIP to use Optional; good idea, Matthias.

For localThreadMetadata, it could already be called during a rebalance.
Also, I mention that the threads return the highest value they have seen
so far for any tasks assigned to them. I thought it would be useful to see
the TaskMetadata while the threads were shutting down. I don't think there
should really be partial information. If you think this should be
clarified further, let me know.
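
To make the "highest value seen" fields concrete, here is a hedged sketch of
the consumer-lag computation they enable; committedOffsets() and endOffsets()
are the accessors proposed in this KIP, so treat the exact signatures as
tentative:

import java.util.Map;

import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.processor.TaskMetadata;
import org.apache.kafka.streams.processor.ThreadMetadata;

public class TaskLagPrinter {
    public static void printLag(KafkaStreams streams) {
        for (ThreadMetadata thread : streams.localThreadsMetadata()) {
            for (TaskMetadata task : thread.activeTasks()) {
                // Lag per input partition: highest end offset seen minus
                // the highest committed offset seen.
                for (Map.Entry<TopicPartition, Long> e : task.endOffsets().entrySet()) {
                    Long committed = task.committedOffsets().get(e.getKey());
                    if (committed != null) {
                        System.out.printf("task %s %s lag=%d%n",
                                task.taskId(), e.getKey(), e.getValue() - committed);
                    }
                }
            }
        }
    }
}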

walker

On Mon, Mar 1, 2021 at 3:45 PM Sophie Blee-Goldman 
wrote:

> Can you clarify your second question, Matthias? If this is queried during
> a cooperative rebalance, it should return the tasks as usual. If the user
> is using eager rebalancing then this will not return any tasks, but the
> user should not rely on all tasks being returned at any given time to
> begin with, since it's possible we are in between revoking and
> re-assigning a partition.
>
> What does "partial information" mean?
>
> (btw I agree that an Optional makes sense for timeCurrentIdlingStarted())
>
> On Mon, Mar 1, 2021 at 11:46 AM Matthias J. Sax  wrote:
>
> > Thanks for updating the KIP, Walker.
> >
> > About `timeCurrentIdlingStarted()`: should we return an `Optional`
> > instead of `-1` if a task is not idling?
> >
> >
> > As we allow `localThreadMetadata()` to be called at any time, could it
> > be that we report partial information during a rebalance? If yes, this
> > should be pointed out, because if one wants to implement a health check,
> > this needs to be taken into account.
> >
> > -Matthias
> >
> >
> > On 2/27/21 11:32 AM, Walker Carlson wrote:
> > > Sure thing Boyang,
> > >
> > > 1) it is in proposed changes. I expanded on it a bit more now.
> > > 2) done
> > > 3) and done :)
> > >
> > > thanks for the suggestions,
> > > walker
> > >
> > > On Fri, Feb 26, 2021 at 3:10 PM Boyang Chen <
> reluctanthero...@gmail.com>
> > > wrote:
> > >
> > >> Thanks Walker. Some minor comments:
> > >>
> > >> 1. Could you add a reference to the localThreadMetadata method in the
> > >> KIP?
> > >> 2. Could you make the code block a Java template, such that
> > >> TaskMetadata.java could be the template title? Also, it would be good
> > >> to add some meta comments about the newly added functions.
> > >> 3. Could you write more details about rejected alternatives, such as
> > >> why we chose not to expose this as metrics, and why a new method on
> > >> KStream is not favorable? These would be valuable when we look back on
> > >> our design decisions.
> > >>
> > >> On Fri, Feb 26, 2021 at 11:23 AM Walker Carlson <
> wcarl...@confluent.io>
> > >> wrote:
> > >>
> > >>> I understand now. I think that is a valid concern, but I think it is
> > >>> best solved by having an external service verify through Streams. As
> > >>> this KIP is now just adding fields to TaskMetadata to be returned in
> > >>> the threadMetadata, I am going to say that is out of scope.
> > >>>
> > >>> That seems to be the last concern. If there are no others, I will put
> > >>> this up for a vote soon.
> > >>>
> > >>> walker
> > >>>
> > >>> On Thu, Feb 25, 2021 at 12:35 PM Boyang Chen <
> > reluctanthero...@gmail.com
> > >>>
> > >>> wrote:
> > >>>
> >  For the 3rd point, yes, what I'm proposing is an edge case. For
> >  example, when we have 4 tasks [0_0, 0_1, 1_0, 1_1], a bug in the
> >  rebalancing logic causes no one to get 1_1 assigned. Then the health
> >  check service will only see 3 tasks [0_0, 0_1, 1_0] reporting progress
> >  normally, while paying no attention to 1_1. What I want to expose is a
> >  "logical global" view of all the tasks through the stream instance,
> >  since each instance gets the assigned topology and should be able to
> >  infer the exact set of tasks that must be up and running when the
> >  service is healthy.
> > 
> >  On Thu, Feb 25, 2021 at 11:25 AM Walker Carlson <
> > wcarl...@confluent.io
> > >>>
> >  wrote:
> > 
> > > Thanks for the follow-up, Boyang and Guozhang.
> > >
> > > I have updated the KIP to include these ideas.
> > >
> > > Guozhang, that is a good idea about using the TaskMetadata. We can get
> > > it through the ThreadMetadata with a minor change to
> > > `localThreadMetadata` in KafkaStreams. This means that we will only
> > > need to update TaskMetadata and add no other APIs.
> > >
> > > Boyang, since each TaskMetadata contains the TaskId and TopicPartitions,
> > > I don't believe mapping either way will be a problem. Also, I think we
> > > can do something like record the time the task started idling and, when
> > > it stops idling, override it to -1. I think that should clear up the
> > > first two points.
> > >
> > > As for your third point, I am not sure I 100% understand. The ThreadMetadata
> > > will contain a set of al

Re: [VOTE] KIP-715: Expose Committed offset in streams

2021-03-01 Thread Guozhang Wang
Thanks Walker for the updated KIP, +1 (binding)


Guozhang

On Mon, Mar 1, 2021 at 3:47 PM Sophie Blee-Goldman 
wrote:

> Thanks for the KIP! +1 (binding)
>
> Sophie
>
> On Mon, Mar 1, 2021 at 10:04 AM Leah Thomas  wrote:
>
> > Hey Walker,
> >
> > Thanks for leading this discussion. +1 from me, non-binding
> >
> > Leah
> >
> > On Mon, Mar 1, 2021 at 12:37 AM Boyang Chen 
> > wrote:
> >
> > > Thanks Walker for the proposal, +1 (binding) from me.
> > >
> > > On Fri, Feb 26, 2021 at 12:42 PM Walker Carlson  >
> > > wrote:
> > >
> > > > Hello all,
> > > >
> > > > I would like to bring KIP-715 to a vote. Here is the KIP:
> > > > https://cwiki.apache.org/confluence/x/aRRRCg.
> > > >
> > > > Walker
> > > >
> > >
> >
>


-- 
-- Guozhang


Re: [VOTE] KIP-715: Expose Committed offset in streams

2021-03-01 Thread Sophie Blee-Goldman
Thanks for the KIP! +1 (binding)

Sophie

On Mon, Mar 1, 2021 at 10:04 AM Leah Thomas  wrote:

> Hey Walker,
>
> Thanks for leading this discussion. +1 from me, non-binding
>
> Leah
>
> On Mon, Mar 1, 2021 at 12:37 AM Boyang Chen 
> wrote:
>
> > Thanks Walker for the proposal, +1 (binding) from me.
> >
> > On Fri, Feb 26, 2021 at 12:42 PM Walker Carlson 
> > wrote:
> >
> > > Hello all,
> > >
> > > I would like to bring KIP-715 to a vote. Here is the KIP:
> > > https://cwiki.apache.org/confluence/x/aRRRCg.
> > >
> > > Walker
> > >
> >
>


Re: [DISCUSS] KIP-715: Expose Committed offset in streams

2021-03-01 Thread Sophie Blee-Goldman
Can you clarify your second question, Matthias? If this is queried during
a cooperative rebalance, it should return the tasks as usual. If the user is
using eager rebalancing then this will not return any tasks, but the user
should not rely on all tasks being returned at any given time to begin with,
since it's possible we are in between revoking and re-assigning a partition.

What does "partial information" mean?

(btw I agree that an Optional makes sense for timeCurrentIdlingStarted())
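
For reference, a sketch of the accessor shape being agreed on here; the
interface name below is hypothetical, and the final KIP-715 signature may
differ:

import java.util.Optional;

// Hypothetical interface illustrating the Optional-based accessor: an empty
// Optional replaces the -1 sentinel, so callers have no magic value to check.
public interface IdlingInfo {
    Optional<Long> timeCurrentIdlingStarted();
}

// Caller side:
//   taskMetadata.timeCurrentIdlingStarted()
//       .ifPresent(start -> System.out.println("idling since " + start));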

On Mon, Mar 1, 2021 at 11:46 AM Matthias J. Sax  wrote:

> Thanks for updating the KIP, Walker.
>
> About `timeCurrentIdlingStarted()`: should we return an `Optional`
> instead of `-1` if a task is not idling?
>
>
> As we allow `localThreadMetadata()` to be called at any time, could it be
> that we report partial information during a rebalance? If yes, this should
> be pointed out, because if one wants to implement a health check, this
> needs to be taken into account.
>
> -Matthias
>
>
> On 2/27/21 11:32 AM, Walker Carlson wrote:
> > Sure thing Boyang,
> >
> > 1) it is in proposed changes. I expanded on it a bit more now.
> > 2) done
> > 3) and done :)
> >
> > thanks for the suggestions,
> > walker
> >
> > On Fri, Feb 26, 2021 at 3:10 PM Boyang Chen 
> > wrote:
> >
> >> Thanks Walker. Some minor comments:
> >>
> >> 1. Could you add a reference to the localThreadMetadata method in the
> >> KIP?
> >> 2. Could you make the code block a Java template, such that
> >> TaskMetadata.java could be the template title? Also, it would be good to
> >> add some meta comments about the newly added functions.
> >> 3. Could you write more details about rejected alternatives, such as why
> >> we chose not to expose this as metrics, and why a new method on KStream
> >> is not favorable? These would be valuable when we look back on our
> >> design decisions.
> >>
> >> On Fri, Feb 26, 2021 at 11:23 AM Walker Carlson 
> >> wrote:
> >>
> >>> I understand now. I think that is a valid concern, but I think it is
> >>> best solved by having an external service verify through Streams. As
> >>> this KIP is now just adding fields to TaskMetadata to be returned in
> >>> the threadMetadata, I am going to say that is out of scope.
> >>>
> >>> That seems to be the last concern. If there are no others, I will put
> >>> this up for a vote soon.
> >>>
> >>> walker
> >>>
> >>> On Thu, Feb 25, 2021 at 12:35 PM Boyang Chen <
> reluctanthero...@gmail.com
> >>>
> >>> wrote:
> >>>
>  For the 3rd point, yes, what I'm proposing is an edge case. For example,
>  when we have 4 tasks [0_0, 0_1, 1_0, 1_1], a bug in the rebalancing logic
>  causes no one to get 1_1 assigned. Then the health check service will only
>  see 3 tasks [0_0, 0_1, 1_0] reporting progress normally, while paying no
>  attention to 1_1. What I want to expose is a "logical global" view of all
>  the tasks through the stream instance, since each instance gets the
>  assigned topology and should be able to infer the exact set of tasks that
>  must be up and running when the service is healthy.
> 
>  On Thu, Feb 25, 2021 at 11:25 AM Walker Carlson <
> wcarl...@confluent.io
> >>>
>  wrote:
> 
> > Thanks for the follow-up, Boyang and Guozhang.
> >
> > I have updated the KIP to include these ideas.
> >
> > Guozhang, that is a good idea about using the TaskMetadata. We can get
> > it through the ThreadMetadata with a minor change to `localThreadMetadata`
> > in KafkaStreams. This means that we will only need to update TaskMetadata
> > and add no other APIs.
> >
> > Boyang, since each TaskMetadata contains the TaskId and TopicPartitions,
> > I don't believe mapping either way will be a problem. Also, I think we
> > can do something like record the time the task started idling and, when
> > it stops idling, override it to -1. I think that should clear up the
> > first two points.
> >
> > As for your third point, I am not sure I 100% understand. The
> > ThreadMetadata will contain a set of all tasks assigned to that thread.
> > Any health check service will just need to query all clients and
> > aggregate their responses to get a complete picture of all tasks,
> > correct?
> >
> > walker
> >
> > On Thu, Feb 25, 2021 at 9:57 AM Guozhang Wang 
>  wrote:
> >
> >> Regarding the second API and the `TaskStatus` class: I'd suggest we
> >> consolidate on the existing `TaskMetadata` since we have already
> >> accumulated a bunch of such classes, and its better to keep them
> >>> small
>  as
> >> public APIs. You can see
> > https://issues.apache.org/jira/browse/KAFKA-12370
> >> for a reference and a proposal.
> >>
> >> On Thu, Feb 25, 2021 at 9:40 AM Boyang Chen <
>  reluctanthero...@gmail.com>
> >> wrote:
> >>
> >>> Thanks 

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #526

2021-03-01 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Remove stack trace of the lock exception in a debug log4j 
(#10231)

[github] KAFKA-12323 Follow-up: Refactor the unit test a bit (#10205)


--
[...truncated 7.30 MB...]

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() STARTED

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
PASSED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() STARTED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
PASSED

LogValidatorTest > testNonCompressedV1() STARTED

LogValidatorTest > testNonCompressedV1() PASSED

LogValidatorTest > testNonCompressedV2() STARTED

LogValidatorTest > testNonCompressedV2() PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV1() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV1() PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV2() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV2() PASSED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() STARTED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() PASSED

LogValidatorTest > testRecompressionV1() STARTED

LogValidatorTest > testRecompressionV1() PASSED

LogValidatorTest > testRecompressionV2() STARTED

LogValidatorTest > testRecompressionV2() PASSED

ProducerStateManagerTest > testSkipEmptyTransactions() STARTED

ProducerStateManagerTest > testSkipEmptyTransactions() PASSED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() STARTED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() PASSED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
STARTED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
PASSED

ProducerStateManagerTest > testCoordinatorFencing() STARTED

ProducerStateManagerTest > testCoordinatorFencing() PASSED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() PASSED

ProducerStateManagerTest > testTruncateFullyAndStartAt() STARTED

ProducerStateManagerTest > testTruncateFullyAndStartAt() PASSED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() STARTED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() STARTED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() PASSED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() 
STARTED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() PASSED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() STARTED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() PASSED

ProducerStateManagerTest > testTakeSnapshot() STARTED

ProducerStateManagerTest > testTakeSnapshot() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() 
STARTED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() PASSED

ProducerStateManagerTest > testDeleteSnapshotsBefore() STARTED

ProducerStateManagerTest > testDeleteSnapshotsBefore() PASSED

ProducerStateManagerTest > testAppendEmptyControlBatch() STARTED

ProducerStateManagerTest > testAppendEmptyControlBatch() PASSED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() STARTED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() PASSED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
STARTED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
PASSED

ProducerStateManagerTest > testRemoveAllStraySnapshots() STARTED

ProducerStateManagerTest > testRemoveAllStraySnapshots() PASSED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() PASSED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 
STARTED

ProducerSt

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #556

2021-03-01 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12323 Follow-up: Refactor the unit test a bit (#10205)


--
[...truncated 3.65 MB...]

ControllerChannelManagerTest > testStopReplicaGroupsByBroker() PASSED

ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr() STARTED

ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr() PASSED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
STARTED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
PASSED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
STARTED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
PASSED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() STARTED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() PASSED

ControllerChannelManagerTest > testStopReplicaRequestSent() STARTED

ControllerChannelManagerTest > testStopReplicaRequestSent() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted() PASSED

ControllerChannelManagerTest > testLeaderAndIsrRequestSent() STARTED

ControllerChannelManagerTest > testLeaderAndIsrRequestSent() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestWithoutDeletePartitionWhileTopicDeletionStarted() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestWithoutDeletePartitionWhileTopicDeletionStarted() PASSED

FeatureZNodeTest > testDecodeFailOnInvalidFeatures() STARTED

FeatureZNodeTest > testDecodeFailOnInvalidFeatures() PASSED

FeatureZNodeTest > testEncodeDecode() STARTED

FeatureZNodeTest > testEncodeDecode() PASSED

FeatureZNodeTest > testDecodeSuccess() STARTED

FeatureZNodeTest > testDecodeSuccess() PASSED

FeatureZNodeTest > testDecodeFailOnInvalidVersionAndStatus() STARTED

FeatureZNodeTest > testDecodeFailOnInvalidVersionAndStatus() PASSED

ExtendedAclStoreTest > shouldHaveCorrectPaths() STARTED

ExtendedAclStoreTest > shouldHaveCorrectPaths() PASSED

ExtendedAclStoreTest > shouldRoundTripChangeNode() STARTED

ExtendedAclStoreTest > shouldRoundTripChangeNode() PASSED

ExtendedAclStoreTest > shouldThrowFromEncodeOnLiteral() STARTED

ExtendedAclStoreTest > shouldThrowFromEncodeOnLiteral() PASSED

ExtendedAclStoreTest > shouldThrowIfConstructedWithLiteral() STARTED

ExtendedAclStoreTest > shouldThrowIfConstructedWithLiteral() PASSED

ExtendedAclStoreTest > shouldWriteChangesToTheWritePath() STARTED

ExtendedAclStoreTest > shouldWriteChangesToTheWritePath() PASSED

ExtendedAclStoreTest > shouldHaveCorrectPatternType() STARTED

ExtendedAclStoreTest > shouldHaveCorrectPatternType() PASSED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@3955a07, value = [B@6af00d80), properties=Map(print.value -> false), 
expected= STARTED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@3955a07, value = [B@6af00d80), properties=Map(print.value -> false), 
expected= PASSED

DefaultMessageFormatterTest > [2] name=print key, record=ConsumerRecord(topic = 
someTopic, partition = 9, leaderEpoch = null, offset = 9876, CreateTime = 1234, 
serialized key size = 0, serialized value size = 0, headers = 
RecordHeaders(headers = [RecordHeader(key = h1, value = [118, 49]), 
RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), key = 
[B@615ee3c9, value = [B@58ff3b99), properties=Map(print.key -> true, 
print.value -> false), expected=someKey
 STARTED

DefaultMessageFormatterTest > [2] name=print key, record=ConsumerRecord(topic = 
someTopic, partition = 9, leaderEpoch = null, offset = 9876, CreateTime = 1234, 
serialized key size = 0, serialized value size = 0, headers = 
RecordHeaders(headers = [RecordHeader(key = h1, value = [118, 49]), 
RecordHeader

Build failed in Jenkins: Kafka » kafka-2.8-jdk8 #45

2021-03-01 Thread Apache Jenkins Server
See 


Changes:

[Guozhang Wang] KAFKA-12323 Follow-up: Refactor the unit test a bit (#10205)


--
[...truncated 3.62 MB...]
SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone() PASSED

SocketServerTest > controlThrowable() STARTED

SocketServerTest > controlThrowable() PASSED

SocketServerTest > testRequestMetricsAfterStop() STARTED

SocketServerTest > testRequestMetricsAfterStop() PASSED

SocketServerTest > testConnectionIdReuse() STARTED

SocketServerTest > testConnectionIdReuse() PASSED

SocketServerTest > testClientInformationWithOldestApiVersionsRequest() STARTED

SocketServerTest > testClientInformationWithOldestApiVersionsRequest() PASSED

SocketServerTest > testSaslReauthenticationFailureNoKip152SaslAuthenticate() 
STARTED

SocketServerTest > testSaslReauthenticationFailureNoKip152SaslAuthenticate() 
PASSED

SocketServerTest > testClientDisconnectionUpdatesRequestMetrics() STARTED

SocketServerTest > testClientDisconnectionUpdatesRequestMetrics() PASSED

SocketServerTest > testProcessorMetricsTags() STARTED

SocketServerTest > testProcessorMetricsTags() PASSED

SocketServerTest > remoteCloseWithBufferedReceivesFailedSend() STARTED

SocketServerTest > remoteCloseWithBufferedReceivesFailedSend() PASSED

SocketServerTest > testMaxConnectionsPerIp() STARTED

SocketServerTest > testMaxConnectionsPerIp() PASSED

SocketServerTest > testConnectionId() STARTED

SocketServerTest > testConnectionId() PASSED

SocketServerTest > remoteCloseSendFailure() STARTED

SocketServerTest > remoteCloseSendFailure() PASSED

SocketServerTest > testBrokerSendAfterChannelClosedUpdatesRequestMetrics() 
STARTED

SocketServerTest > testBrokerSendAfterChannelClosedUpdatesRequestMetrics() 
PASSED

SocketServerTest > 
testControlPlaneTakePrecedenceOverInterBrokerListenerAsPrivilegedListener() 
STARTED

SocketServerTest > 
testControlPlaneTakePrecedenceOverInterBrokerListenerAsPrivilegedListener() 
PASSED

SocketServerTest > testNoOpAction() STARTED

SocketServerTest > testNoOpAction() PASSED

SocketServerTest > simpleRequest() STARTED

SocketServerTest > simpleRequest() PASSED

SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress() STARTED

SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress() PASSED

SocketServerTest > testIdleConnection() STARTED

SocketServerTest > testIdleConnection() PASSED

SocketServerTest > remoteCloseWithoutBufferedReceives() STARTED

SocketServerTest > remoteCloseWithoutBufferedReceives() PASSED

SocketServerTest > remoteCloseWithCompleteAndIncompleteBufferedReceives() 
STARTED

SocketServerTest > remoteCloseWithCompleteAndIncompleteBufferedReceives() PASSED

SocketServerTest > testZeroMaxConnectionsPerIp() STARTED

SocketServerTest > testZeroMaxConnectionsPerIp() PASSED

SocketServerTest > testClientInformationWithLatestApiVersionsRequest() STARTED

SocketServerTest > testClientInformationWithLatestApiVersionsRequest() PASSED

SocketServerTest > testMetricCollectionAfterShutdown() STARTED

SocketServerTest > testMetricCollectionAfterShutdown() PASSED

SocketServerTest > testSessionPrincipal() STARTED

SocketServerTest > testSessionPrincipal() PASSED

SocketServerTest > configureNewConnectionException() STARTED

SocketServerTest > configureNewConnectionException() PASSED

SocketServerTest > testSaslReauthenticationFailureWithKip152SaslAuthenticate() 
STARTED

SocketServerTest > testSaslReauthenticationFailureWithKip152SaslAuthenticate() 
PASSED

SocketServerTest > testMaxConnectionsPerIpOverrides() STARTED

SocketServerTest > testMaxConnectionsPerIpOverrides() PASSED

SocketServerTest > processNewResponseException() STARTED

SocketServerTest > processNewResponseException() PASSED

SocketServerTest > remoteCloseWithIncompleteBufferedReceive() STARTED

SocketServerTest > remoteCloseWithIncompleteBufferedReceive() PASSED

SocketServerTest > testStagedListenerShutdownWhenConnectionQueueIsFull() STARTED

SocketServerTest > testStagedListenerShutdownWhenConnectionQueueIsFull() PASSED

SocketServerTest > testStagedListenerStartup() STARTED

SocketServerTest > testStagedListenerStartup() PASSED

SocketServerTest > testConnectionRateLimit() STARTED

SocketServerTest > testConnectionRateLimit() PASSED

SocketServerTest > testConnectionRatePerIp() STARTED

SocketServerTest > testConnectionRatePerIp() PASSED

SocketServerTest > processCompletedSendException() STARTED

SocketServerTest > processCompletedSendException() PASSED

SocketServerTest > processDisconnectedException() STARTED

SocketServerTest > processDisconnectedException() PASSED

SocketServerTest > closingChannelWithBufferedReceives() STARTED

SocketServerTest > closingChannelWithBufferedReceives() PASSED

SocketServerTest > sendCancelledKeyException() STARTED

SocketServerTest > sendCancelledKeyException() PASSED

So

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #582

2021-03-01 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Word count should account for extra whitespaces between words 
(#10229)

[github] MINOR: Remove stack trace of the lock exception in a debug log4j 
(#10231)

[github] KAFKA-12323 Follow-up: Refactor the unit test a bit (#10205)


--
[...truncated 3.65 MB...]

ProducerFailureHandlingTest > testCannotSendToInternalTopic() PASSED

ProducerFailureHandlingTest > testTooLargeRecordWithAckOne() STARTED

ProducerFailureHandlingTest > testTooLargeRecordWithAckOne() PASSED

ProducerFailureHandlingTest > testWrongBrokerList() STARTED

ProducerFailureHandlingTest > testWrongBrokerList() PASSED

ProducerFailureHandlingTest > testNotEnoughReplicas() STARTED

ProducerFailureHandlingTest > testNotEnoughReplicas() PASSED

ProducerFailureHandlingTest > testResponseTooLargeForReplicationWithAckAll() 
STARTED

ProducerFailureHandlingTest > testResponseTooLargeForReplicationWithAckAll() 
PASSED

ProducerFailureHandlingTest > testNonExistentTopic() STARTED

ProducerFailureHandlingTest > testNonExistentTopic() PASSED

ProducerFailureHandlingTest > testInvalidPartition() STARTED

ProducerFailureHandlingTest > testInvalidPartition() PASSED

ProducerFailureHandlingTest > testSendAfterClosed() STARTED

ProducerFailureHandlingTest > testSendAfterClosed() PASSED

ProducerFailureHandlingTest > testTooLargeRecordWithAckZero() STARTED

ProducerFailureHandlingTest > testTooLargeRecordWithAckZero() PASSED

ProducerFailureHandlingTest > testPartitionTooLargeForReplicationWithAckAll() 
STARTED

ProducerFailureHandlingTest > testPartitionTooLargeForReplicationWithAckAll() 
PASSED

ProducerFailureHandlingTest > testNotEnoughReplicasAfterBrokerShutdown() STARTED

ProducerFailureHandlingTest > testNotEnoughReplicasAfterBrokerShutdown() PASSED

ApiVersionTest > testApiVersionUniqueIds() STARTED

ApiVersionTest > testApiVersionUniqueIds() PASSED

ApiVersionTest > testMinSupportedVersionFor() STARTED

ApiVersionTest > testMinSupportedVersionFor() PASSED

ApiVersionTest > testShortVersion() STARTED

ApiVersionTest > testShortVersion() PASSED

ApiVersionTest > 
shouldReturnAllKeysWhenMagicIsCurrentValueAndThrottleMsIsDefaultThrottle() 
STARTED

ApiVersionTest > 
shouldReturnAllKeysWhenMagicIsCurrentValueAndThrottleMsIsDefaultThrottle() 
PASSED

ApiVersionTest > testApply() STARTED

ApiVersionTest > testApply() PASSED

ApiVersionTest > testMetadataQuorumApisAreDisabled() STARTED

ApiVersionTest > testMetadataQuorumApisAreDisabled() PASSED

ApiVersionTest > 
shouldReturnFeatureKeysWhenMagicIsCurrentValueAndThrottleMsIsDefaultThrottle() 
STARTED

ApiVersionTest > 
shouldReturnFeatureKeysWhenMagicIsCurrentValueAndThrottleMsIsDefaultThrottle() 
PASSED

ApiVersionTest > shouldCreateApiResponseOnlyWithKeysSupportedByMagicValue() 
STARTED

ApiVersionTest > shouldCreateApiResponseOnlyWithKeysSupportedByMagicValue() 
PASSED

ApiVersionTest > testApiVersionValidator() STARTED

ApiVersionTest > testApiVersionValidator() PASSED

ControllerContextTest > 
testPartitionFullReplicaAssignmentReturnsEmptyAssignmentIfTopicOrPartitionDoesNotExist()
 STARTED

ControllerContextTest > 
testPartitionFullReplicaAssignmentReturnsEmptyAssignmentIfTopicOrPartitionDoesNotExist()
 PASSED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsEmptyMapIfTopicDoesNotExist() 
STARTED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsEmptyMapIfTopicDoesNotExist() 
PASSED

ControllerContextTest > testPreferredReplicaImbalanceMetric() STARTED

ControllerContextTest > testPreferredReplicaImbalanceMetric() PASSED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsExpectedReplicaAssignments() 
STARTED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsExpectedReplicaAssignments() PASSED

ControllerContextTest > testReassignTo() STARTED

ControllerContextTest > testReassignTo() PASSED

ControllerContextTest > testPartitionReplicaAssignment() STARTED

ControllerContextTest > testPartitionReplicaAssignment() PASSED

ControllerContextTest > 
testUpdatePartitionFullReplicaAssignmentUpdatesReplicaAssignment() STARTED

ControllerContextTest > 
testUpdatePartitionFullReplicaAssignmentUpdatesReplicaAssignment() PASSED

ControllerContextTest > testReassignToIdempotence() STARTED

ControllerContextTest > testReassignToIdempotence() PASSED

ControllerContextTest > 
testPartitionReplicaAssignmentReturnsEmptySeqIfTopicOrPartitionDoesNotExist() 
STARTED

ControllerContextTest > 
testPartitionReplicaAssignmentReturnsEmptySeqIfTopicOrPartitionDoesNotExist() 
PASSED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordH

Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #129

2021-03-01 Thread Apache Jenkins Server
See 


Changes:

[Guozhang Wang] KAFKA-12323 Follow-up: Refactor the unit test a bit (#10205)


--
[...truncated 3.45 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
sh

[jira] [Created] (KAFKA-12396) Dedicated exception for kstreams when null key received

2021-03-01 Thread Veniamin Kalegin (Jira)
Veniamin Kalegin created KAFKA-12396:


 Summary: Dedicated exception for kstreams when null key received
 Key: KAFKA-12396
 URL: https://issues.apache.org/jira/browse/KAFKA-12396
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 2.6.0
Reporter: Veniamin Kalegin


If a Kafka Streams application receives null as a key (thanks to QA), the 
app produces a long and confusing stack trace. It would be nice to have a 
shorter, more specific exception instead of
{{org.apache.kafka.streams.errors.StreamsException: Exception caught in 
process. taskId=0_0, processor=KSTREAM-SOURCE-00, topic=(hidden), 
partition=0, offset=3722, stacktrace=java.lang.NullPointerException}}
 at 
org.apache.kafka.streams.state.internals.RocksDBStore.get(RocksDBStore.java:286)
 at 
org.apache.kafka.streams.state.internals.RocksDBStore.get(RocksDBStore.java:74)
 at 
org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.get(ChangeLoggingKeyValueBytesStore.java:94)
 at 
org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.get(ChangeLoggingKeyValueBytesStore.java:29)
 at 
org.apache.kafka.streams.state.internals.MeteredKeyValueStore.lambda$get$2(MeteredKeyValueStore.java:133)
 at 
org.apache.kafka.streams.state.internals.MeteredKeyValueStore$$Lambda$1048/0x60630fd0.get(Unknown
 Source)
 at 
org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:851)
 at 
org.apache.kafka.streams.state.internals.MeteredKeyValueStore.get(MeteredKeyValueStore.java:133)
 at 
org.apache.kafka.streams.processor.internals.AbstractReadWriteDecorator$KeyValueStoreReadWriteDecorator.get(AbstractReadWriteDecorator.java:78)
 at 
org.apache.kafka.streams.kstream.internals.KStreamTransformValues$KStreamTransformValuesProcessor.process(KStreamTransformValues.java:64)
 at 
org.apache.kafka.streams.processor.internals.ProcessorNode.lambda$process$2(ProcessorNode.java:142)
 at 
org.apache.kafka.streams.processor.internals.ProcessorNode$$Lambda$1047/0x60630b10.run(Unknown
 Source)
 at 
org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:836)
 at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:142)
 at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:236)
 at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:216)
 at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:168)
 at 
org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:96)
 at 
org.apache.kafka.streams.processor.internals.StreamTask.lambda$process$1(StreamTask.java:679)
 at 
org.apache.kafka.streams.processor.internals.StreamTask$$Lambda$1046/0x605250f0.run(Unknown
 Source)
 at 
org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:836)
 at 
org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:679)
 at 
org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:1033)
 at 
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:690)
 at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:551)
 at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:510)
 at 
org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:696)
 at 
org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:1033)
 at 
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:690)
 at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:551)
 at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:510)
Caused by: java.lang.NullPointerException
 at 
org.apache.kafka.streams.state.internals.RocksDBStore.get(RocksDBStore.java:286)
 at 
org.apache.kafka.streams.state.internals.RocksDBStore.get(RocksDBStore.java:74)
 at 
org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.get(ChangeLoggingKeyValueBytesStore.java:94)
 at 
org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.get(ChangeLoggingKeyValueBytesStore.java:29)
 at 
org.apache.kafka.streams.state.internals.MeteredKeyValueStore.lambda$get$2(MeteredKeyValueStore.java:133)
 at 
org.apache.kafka.streams.state.internals.MeteredKeyValueStore$$Lambda$1048/0x60630fd0.get(Unknown
 Source)
 at 
org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:851)
 at 
org.apache.kafka.streams.state.internals.MeteredKeyValueStore
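
Until such a dedicated exception exists, here is a hedged sketch of the kind
of guard an application could add so that a null key fails fast with a clear
message instead of an NPE deep inside RocksDBStore; the transformer class and
store name below are hypothetical:

import org.apache.kafka.streams.errors.StreamsException;
import org.apache.kafka.streams.kstream.ValueTransformerWithKey;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;

public class NullKeyGuardTransformer implements ValueTransformerWithKey<String, String, String> {
    private KeyValueStore<String, String> store;

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        store = (KeyValueStore<String, String>) context.getStateStore("my-store");
    }

    @Override
    public String transform(String key, String value) {
        if (key == null) {
            // Fail fast with a specific, self-explanatory exception instead
            // of letting the null propagate into the state store.
            throw new StreamsException("Record key must not be null for state-store lookups");
        }
        return store.get(key);
    }

    @Override
    public void close() {}
}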

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #555

2021-03-01 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR; Small refactor in `GroupMetadata` (#10236)

[github] MINOR: Word count should account for extra whitespaces between words 
(#10229)

[github] MINOR: Remove stack trace of the lock exception in a debug log4j 
(#10231)


--
[...truncated 3.65 MB...]

ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr() PASSED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
STARTED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
PASSED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
STARTED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
PASSED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() STARTED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() PASSED

ControllerChannelManagerTest > testStopReplicaRequestSent() STARTED

ControllerChannelManagerTest > testStopReplicaRequestSent() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted() PASSED

ControllerChannelManagerTest > testLeaderAndIsrRequestSent() STARTED

ControllerChannelManagerTest > testLeaderAndIsrRequestSent() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestWithoutDeletePartitionWhileTopicDeletionStarted() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestWithoutDeletePartitionWhileTopicDeletionStarted() PASSED

FeatureZNodeTest > testDecodeFailOnInvalidFeatures() STARTED

FeatureZNodeTest > testDecodeFailOnInvalidFeatures() PASSED

FeatureZNodeTest > testEncodeDecode() STARTED

FeatureZNodeTest > testEncodeDecode() PASSED

FeatureZNodeTest > testDecodeSuccess() STARTED

FeatureZNodeTest > testDecodeSuccess() PASSED

FeatureZNodeTest > testDecodeFailOnInvalidVersionAndStatus() STARTED

FeatureZNodeTest > testDecodeFailOnInvalidVersionAndStatus() PASSED

ExtendedAclStoreTest > shouldHaveCorrectPaths() STARTED

ExtendedAclStoreTest > shouldHaveCorrectPaths() PASSED

ExtendedAclStoreTest > shouldRoundTripChangeNode() STARTED

ExtendedAclStoreTest > shouldRoundTripChangeNode() PASSED

ExtendedAclStoreTest > shouldThrowFromEncodeOnLiteral() STARTED

ExtendedAclStoreTest > shouldThrowFromEncodeOnLiteral() PASSED

ExtendedAclStoreTest > shouldThrowIfConstructedWithLiteral() STARTED

ExtendedAclStoreTest > shouldThrowIfConstructedWithLiteral() PASSED

ExtendedAclStoreTest > shouldWriteChangesToTheWritePath() STARTED

ExtendedAclStoreTest > shouldWriteChangesToTheWritePath() PASSED

ExtendedAclStoreTest > shouldHaveCorrectPatternType() STARTED

ExtendedAclStoreTest > shouldHaveCorrectPatternType() PASSED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@3d60e708, value = [B@72ee7910), properties=Map(print.value -> false), 
expected= STARTED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@3d60e708, value = [B@72ee7910), properties=Map(print.value -> false), 
expected= PASSED

DefaultMessageFormatterTest > [2] name=print key, record=ConsumerRecord(topic = 
someTopic, partition = 9, leaderEpoch = null, offset = 9876, CreateTime = 1234, 
serialized key size = 0, serialized value size = 0, headers = 
RecordHeaders(headers = [RecordHeader(key = h1, value = [118, 49]), 
RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), key = 
[B@53d345be, value = [B@701854d6), properties=Map(print.key -> true, 
print.value -> false), expected=someKey
 STARTED

DefaultMessageFormatterTest > [2] name=print key, record=ConsumerRecord(topic = 
someTopic, partition = 9, leaderEpoch = null, offset = 9876, CreateTime = 1234, 
serialized key size = 0, serialized value size = 0, headers = 
RecordHeaders(headers = [RecordHeader(key = h1, value = [118, 49]), 
RecordHeader(key = h2, 

Re: [DISCUSS] KIP-708: Rack aware Kafka Streams with pluggable StandbyTask assignor

2021-03-01 Thread Bruno Cadonna

Thanks Levani for the update on the KIP!

The KIP looks good!

I have just a couple of comments.

1. Could you try to formulate "The ideal distribution means there is no 
repeated client dimension amongst clients assigned to the active task 
and all standby tasks." a bit differently. I find it a bit hard to 
grasp. What do you think about something like: "With an ideal task 
distribution each client of the set of clients that host a given active 
task and the corresponding standby replicas has a unique value for each 
tag with regard to the other clients in the set."? Maybe you can find a 
better way to formulate it.


2. Could you also add an example for the ideal task distribution? I 
think you had one in a previous version of the KIP. Since this is the 
only requirement on the implementation, I think it makes sense to 
highlight it with an example.


3. I think it is really good to state that the implementation should not 
fail if the assignment cannot completely satisfy the tag constraints. 
However, I would not state that Kafka Streams will "try to find the 
subsequent most optimal distribution" in that case, because we actually 
do not define a total order on possible distributions. I would simply 
state that Streams will distribute the stand-by replicas on a 
best-effort basis.



4. I would move section "Benefits of tags vs single rack.id 
configuration" to section "Rejected Alternatives".


5. I would delete:

", specifically: HighAvailabilityTaskAssignor#assignStandbyReplicaTasks 
and HighAvailabilityTaskAssignor#assignStandbyTaskMovements".


Those are private methods that might easily change in future.

6. Could you describe the changes to SubscriptionInfo as it is done in 
KIP-441? I think that style is a bit more readable, and we used it in 
most of the Streams KIPs.


7. Could you add a method clientTagPrefix() to StreamsConfig similar to 
consumerPrefix()? It seems like we always add both: the constant for the 
prefix and the method.
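
For illustration, mirroring the existing consumerPrefix() pattern, such a 
helper could look like this Java sketch (a sketch only, not the final 
KIP-708 API):

  // In StreamsConfig: constant plus prefix helper, analogous to
  // CONSUMER_PREFIX / consumerPrefix().
  public static final String CLIENT_TAG_PREFIX = "client.tag.";

  public static String clientTagPrefix(final String clientTagKey) {
      return CLIENT_TAG_PREFIX + clientTagKey;
  }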



Best,
Bruno


On 28.02.21 17:36, Ryanne Dolan wrote:

Thanks Levani for the explanation. I think I understand.

Is "rack" still a useful term in this context? I think my concept of "rack"
made it hard for me to wrap my head around the multiple tags approach. For
example, how can a node be in different racks at the same time? And why
would multiple dimensions be needed to describe a rack? I see from your
example that this is not the intention, but the terminology made me think
it was.

Could just be me.

Ryanne


On Sun, Feb 28, 2021, 6:18 AM Levani Kokhreidze 
wrote:


Hello Ryanne,

Thanks for the question.
The tag approach gives more flexibility, which otherwise would only have been
possible with pluggable custom logic the Kafka Streams user must provide (this
is briefly described in the "Rejected Alternatives" section).
For instance, if we append multiple tags to form a single rack, it may not
give the user the desired distribution if the infrastructure topology is
more complex.
Let us consider the following example, where multiple tags are appended to
form a single rack.
Node-1:
rack.id: K8s_Cluster1-eu-central-1a
num.standby.replicas: 1

Node-2:
rack.id: K8s_Cluster1-eu-central-1b
num.standby.replicas: 1

Node-3:
rack.id: K8s_Cluster1-eu-central-1c
num.standby.replicas: 1

Node-4:
rack.id: K8s_Cluster2-eu-central-1a
num.standby.replicas: 1

Node-5:
rack.id: K8s_Cluster2-eu-central-1b
num.standby.replicas: 1

Node-6:
rack.id: K8s_Cluster2-eu-central-1c
num.standby.replicas: 1

In the example mentioned above, we have three AZs and two Kubernetes
clusters. Our use-case is to place each standby task in a different
Kubernetes cluster and a different availability zone than its active task.
For instance, if the active task is on Node-1 (K8s_Cluster1-eu-central-1a),
the corresponding standby task should be on either
Node-5 (K8s_Cluster2-eu-central-1b) or Node-6 (K8s_Cluster2-eu-central-1c).
Unfortunately, without custom logic provided by the user, this would be
very hard to achieve with a single "rack.id" configuration, because without
any further input from the user, Kafka Streams might just as well allocate
the standby task for that active task either:
- in the same Kubernetes cluster but a different AZ (Node-2, Node-3), or
- in a different Kubernetes cluster but the same AZ (Node-4).
On the other hand, with the combination of the new "client.tag.*" and
"task.assignment.rack.awareness" configurations, the standby task
distribution algorithm will be able to figure out the most optimal
distribution by balancing the standby tasks over each client.tag dimension
individually. And this can be achieved simply by providing the necessary
configurations to Kafka Streams.
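
For illustration, the tag-based configuration for the same six nodes could
look as follows (a sketch; the exact property names follow this KIP
discussion and may still change):

Node-1:
client.tag.cluster: K8s_Cluster1
client.tag.zone: eu-central-1a
task.assignment.rack.awareness: cluster,zone
num.standby.replicas: 1

Node-5:
client.tag.cluster: K8s_Cluster2
client.tag.zone: eu-central-1b
task.assignment.rack.awareness: cluster,zone
num.standby.replicas: 1

(and analogously for the remaining nodes), so the assignor can balance
standby tasks over the "cluster" and "zone" dimensions independently.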
The flow was described in more detail in previous versions of the KIP,
but I've omitted the KIP algorithm implementation details based on received
feedback. But I acknowledge that this information can be put in the KIP for
better clarity. I took the liberty of updating the KIP with the example
mentioned above [1].
I hope this answers your question.

Regards,
Lev

Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #581

2021-03-01 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-12395) Drop topic mapKey in DeleteTopics response

2021-03-01 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-12395:
---

 Summary: Drop topic mapKey in DeleteTopics response
 Key: KAFKA-12395
 URL: https://issues.apache.org/jira/browse/KAFKA-12395
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson


Now that DeleteTopics requests/responses may be keyed by topicId, the use of 
the topic name as a map key in the response makes less sense. We should 
consider dropping it. 
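
For context, this is roughly how such a key appears in Kafka's message
schema definitions (an illustrative excerpt, not the verbatim
DeleteTopicsResponse.json):

  { "name": "Name", "type": "string", "versions": "0+", "mapKey": true,
    "about": "The topic name." }

Dropping the mapKey attribute would turn the keyed collection into a plain
array in the generated code.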





Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #525

2021-03-01 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR; Small refactor in `GroupMetadata` (#10236)

[github] MINOR: Word count should account for extra whitespaces between words 
(#10229)


--
[...truncated 3.65 MB...]

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() STARTED

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
PASSED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() STARTED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
PASSED

LogValidatorTest > testNonCompressedV1() STARTED

LogValidatorTest > testNonCompressedV1() PASSED

LogValidatorTest > testNonCompressedV2() STARTED

LogValidatorTest > testNonCompressedV2() PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV1() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV1() PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV2() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV2() PASSED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() STARTED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() PASSED

LogValidatorTest > testRecompressionV1() STARTED

LogValidatorTest > testRecompressionV1() PASSED

LogValidatorTest > testRecompressionV2() STARTED

LogValidatorTest > testRecompressionV2() PASSED

ProducerStateManagerTest > testSkipEmptyTransactions() STARTED

ProducerStateManagerTest > testSkipEmptyTransactions() PASSED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() STARTED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() PASSED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
STARTED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
PASSED

ProducerStateManagerTest > testCoordinatorFencing() STARTED

ProducerStateManagerTest > testCoordinatorFencing() PASSED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() PASSED

ProducerStateManagerTest > testTruncateFullyAndStartAt() STARTED

ProducerStateManagerTest > testTruncateFullyAndStartAt() PASSED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() STARTED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() STARTED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() PASSED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() 
STARTED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() PASSED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() STARTED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() PASSED

ProducerStateManagerTest > testTakeSnapshot() STARTED

ProducerStateManagerTest > testTakeSnapshot() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() 
STARTED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() PASSED

ProducerStateManagerTest > testDeleteSnapshotsBefore() STARTED

ProducerStateManagerTest > testDeleteSnapshotsBefore() PASSED

ProducerStateManagerTest > testAppendEmptyControlBatch() STARTED

ProducerStateManagerTest > testAppendEmptyControlBatch() PASSED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() STARTED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() PASSED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
STARTED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
PASSED

ProducerStateManagerTest > testRemoveAllStraySnapshots() STARTED

ProducerStateManagerTest > testRemoveAllStraySnapshots() PASSED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() PASSED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 
STARTED

ProducerStateMana

[jira] [Created] (KAFKA-12394) Consider topic id existence and authorization errors

2021-03-01 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-12394:
---

 Summary: Consider topic id existence and authorization errors
 Key: KAFKA-12394
 URL: https://issues.apache.org/jira/browse/KAFKA-12394
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson


We have historically had logic in the api layer to avoid leaking the existence 
or non-existence of topics to clients which are not authorized to describe 
them. The way we have done this is to always authorize the topic name first 
before checking existence.

Topic ids make this more difficult because the resource (i.e. the topic name) 
has to be derived. This means we have to check the existence of the topic 
first. If the topic does not exist, then our hands are tied and we have to 
return UNKNOWN_TOPIC_ID. If the topic does exist, then we need to check if the 
client is authorized to describe it. The question then is what we should do if 
the client is not authorized.
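
To make the ordering concrete, a rough sketch of the flow described above
(hypothetical helper names, not actual broker code):

  Optional<String> topicName = metadataCache.topicName(topicId);
  if (!topicName.isPresent()) {
      // Existence must be checked first, since the resource is derived.
      return Errors.UNKNOWN_TOPIC_ID;
  }
  if (!authorize(DESCRIBE, topicName.get())) {
      // The open question: UNKNOWN_TOPIC_ID (current behavior, hides
      // existence) or TOPIC_AUTHORIZATION_FAILED (clearer for clients)?
      return Errors.UNKNOWN_TOPIC_ID;
  }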

The current behavior is to return UNKNOWN_TOPIC_ID. The downside is that this 
is misleading and forces the client to retry even though they are doomed to hit 
the same error. However, the client should generally handle this by requesting 
Metadata using the topic name that they are interested in, which would give 
them a chance to see the topic authorization error. Basically the fact that you 
need describe permission in the first place to discover the topic id makes this 
an unlikely scenario.

There is an argument to be made for TOPIC_AUTHORIZATION_FAILED as well. 
Basically we could take the stance that we do not care about leaking the 
existence of topic IDs since they do not reveal anything about the underlying 
topic. Additionally, there is little likelihood of a user discovering a valid 
UUID by accident or even through brute force. The benefit of this is that users 
get a clear error for cases where a topic Id may have been discovered through 
some external means. For example, an administrator finds a topic ID in the 
logging and attempts to delete it using the new `deleteTopicsWithIds` Admin API.





Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #554

2021-03-01 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10340: Proactively close producer when cancelling source tasks 
(#10016)


--
[...truncated 7.33 MB...]

ControllerChannelManagerTest > 
testStopReplicaRequestWithAlreadyDefinedDeletedPartition() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestWithAlreadyDefinedDeletedPartition() PASSED

ControllerChannelManagerTest > testUpdateMetadataInterBrokerProtocolVersion() 
STARTED

ControllerChannelManagerTest > testUpdateMetadataInterBrokerProtocolVersion() 
PASSED

ControllerChannelManagerTest > testLeaderAndIsrRequestIsNew() STARTED

ControllerChannelManagerTest > testLeaderAndIsrRequestIsNew() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicQueuedForDeletion() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicQueuedForDeletion() PASSED

ControllerChannelManagerTest > 
testLeaderAndIsrRequestSentToLiveOrShuttingDownBrokers() STARTED

ControllerChannelManagerTest > 
testLeaderAndIsrRequestSentToLiveOrShuttingDownBrokers() PASSED

ControllerChannelManagerTest > testStopReplicaInterBrokerProtocolVersion() 
STARTED

ControllerChannelManagerTest > testStopReplicaInterBrokerProtocolVersion() 
PASSED

ControllerChannelManagerTest > 
testStopReplicaSentOnlyToLiveAndShuttingDownBrokers() STARTED

ControllerChannelManagerTest > 
testStopReplicaSentOnlyToLiveAndShuttingDownBrokers() PASSED

ControllerChannelManagerTest > testStopReplicaGroupsByBroker() STARTED

ControllerChannelManagerTest > testStopReplicaGroupsByBroker() PASSED

ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr() STARTED

ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr() PASSED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
STARTED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
PASSED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
STARTED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
PASSED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() STARTED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() PASSED

ControllerChannelManagerTest > testStopReplicaRequestSent() STARTED

ControllerChannelManagerTest > testStopReplicaRequestSent() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted() PASSED

ControllerChannelManagerTest > testLeaderAndIsrRequestSent() STARTED

ControllerChannelManagerTest > testLeaderAndIsrRequestSent() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestWithoutDeletePartitionWhileTopicDeletionStarted() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestWithoutDeletePartitionWhileTopicDeletionStarted() PASSED

MetricsTest > testUpdateJMXFilter() STARTED

MetricsTest > testUpdateJMXFilter() PASSED

MetricsTest > testGeneralBrokerTopicMetricsAreGreedilyRegistered() STARTED

MetricsTest > testGeneralBrokerTopicMetricsAreGreedilyRegistered() PASSED

MetricsTest > testLinuxIoMetrics() STARTED

MetricsTest > testLinuxIoMetrics() PASSED

MetricsTest > testMetricsReporterAfterDeletingTopic() STARTED

MetricsTest > testMetricsReporterAfterDeletingTopic() PASSED

MetricsTest > testSessionExpireListenerMetrics() STARTED

MetricsTest > testSessionExpireListenerMetrics() PASSED

MetricsTest > testBrokerTopicMetricsUnregisteredAfterDeletingTopic() STARTED

MetricsTest > testBrokerTopicMetricsUnregisteredAfterDeletingTopic() PASSED

MetricsTest > testYammerMetricsCountMetric() STARTED

MetricsTest > testYammerMetricsCountMetric() PASSED

MetricsTest > testClusterIdMetric() STARTED

MetricsTest > testClusterIdMetric() PASSED

MetricsTest > testControllerMetrics() STARTED

MetricsTest > testControllerMetrics() PASSED

MetricsTest > testWindowsStyleTagNames() STARTED

MetricsTest > testWindowsStyleTagNames() PASSED

MetricsTest > testBrokerStateMetric() STARTED

MetricsTest > testBrokerStateMetric() PASSED

MetricsTest > testBrokerTopicMetricsBytesInOut() STARTED

MetricsTest > testBrokerTopicMetricsBytesInOut() PASSED

MetricsTest > testJMXFilter() STARTED

MetricsTest > testJMXFilter() PASSED

KafkaTimerTest > testKafkaTimer() STARTED

KafkaTimerTest > testKafkaTimer() PASSED

LinuxIoMetricsCollectorTest > testReadProcFile() STARTED

LinuxIoMetricsCollectorTest > testReadPr

Re: [DISCUSS] KIP-715: Expose Committed offset in streams

2021-03-01 Thread Matthias J. Sax
Thanks for updating the KIP, Walker.

About `timeCurrentIdlingStarted()`: should we return an `Optional`
instead of `-1` if a task is not idling?
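
For illustration, the accessor could then look like this (a sketch only;
the exact signature and names are up to the KIP, not the final API):

  // On TaskMetadata: empty when the task is currently not idling,
  // instead of a -1 sentinel value.
  Optional<Long> timeCurrentIdlingStarted();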


As we allow calling `localThreadMetadata()` at any time, could it be that
we report partial information during a rebalance? If yes, this should be
pointed out, because if one wants to implement a health check this needs
to be taken into account.

-Matthias


On 2/27/21 11:32 AM, Walker Carlson wrote:
> Sure thing Boyang,
> 
> 1) it is in proposed changes. I expanded on it a bit more now.
> 2) done
> 3) and done :)
> 
> thanks for the suggestions,
> walker
> 
> On Fri, Feb 26, 2021 at 3:10 PM Boyang Chen 
> wrote:
> 
>> Thanks Walker. Some minor comments:
>>
>> 1. Could you add a reference to localThreadMetadata method in the KIP?
>> 2. Could you format the code block as a Java template, such that
>> TaskMetadata.java can be the template title? Also it would be good to
>> add some meta comments about the newly added functions.
>> 3. Could you write more details about rejected alternatives? Such as why
>> we chose not to expose this as metrics, and why a new method on KStream is
>> not favorable. These would be valuable when we look back on our design
>> decisions.
>>
>> On Fri, Feb 26, 2021 at 11:23 AM Walker Carlson 
>> wrote:
>>
>>> I understand now. I think that is a valid concern, but I think it is best
>>> solved by having an external service verify through Streams. As this KIP
>>> is now just adding fields to TaskMetadata to be returned in the
>>> ThreadMetadata, I am going to say that is out of scope.
>>>
>>> That seems to be the last concern. If there are no others I will put this
>>> up for a vote soon.
>>>
>>> walker
>>>
>>> On Thu, Feb 25, 2021 at 12:35 PM Boyang Chen wrote:
>>>
 For the 3rd point, yes, what I'm proposing is an edge case. For example,
 when we have 4 tasks [0_0, 0_1, 1_0, 1_1], a bug in the rebalancing logic
 could cause 1_1 to never be assigned. Then the health check service will
 only see 3 tasks [0_0, 0_1, 1_0] reporting progress normally while not
 paying attention to 1_1. What I want to expose is a "logical global" view of
 all the tasks through the stream instance, since each instance gets the
 assigned topology and should be able to infer the exact set of tasks that
 must be up and running when the service is healthy.

 On Thu, Feb 25, 2021 at 11:25 AM Walker Carlson wrote:

> Thanks for the follow up Boyang and Guozhang,
>
> I have updated the KIP to include these ideas.
>
> Guozhang, that is a good idea about using the TaskMetadata. We can get it
> through the ThreadMetadata with a minor change to `localThreadMetadata`
> in KafkaStreams. This means that we will only need to update TaskMetadata
> and add no other APIs.
>
> Boyang, since each TaskMetadata contains the TaskId and TopicPartitions I
> don't believe mapping either way will be a problem. Also I think we can do
> something like record the time the task started idling, and when it stops
> idling we can override it to -1. I think that should clear up the first
> two points.
>
> As for your third point I am not sure I 100% understand. The ThreadMetadata
> will contain a set of all tasks assigned to that thread. Any health check
> service will just need to query all clients and aggregate their responses
> to get a complete picture of all tasks, correct?
>
> walker
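
For illustration, a health check along these lines could aggregate the
metadata roughly as in this sketch (assuming the Kafka Streams 2.x API;
`allClients` is a hypothetical collection of the application's KafkaStreams
instances):

  Set<TaskMetadata> allTasks = new HashSet<>();
  for (KafkaStreams client : allClients) {
      for (ThreadMetadata thread : client.localThreadsMetadata()) {
          allTasks.addAll(thread.activeTasks()); // plus standbyTasks() if needed
      }
  }
  // Compare the observed task ids against the expected set derived from
  // the topology to spot tasks that were never assigned.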
>
> On Thu, Feb 25, 2021 at 9:57 AM Guozhang Wang wrote:
>
>> Regarding the second API and the `TaskStatus` class: I'd suggest we
>> consolidate on the existing `TaskMetadata` since we have already
>> accumulated a bunch of such classes, and it's better to keep them small
>> as public APIs. You can see
>> https://issues.apache.org/jira/browse/KAFKA-12370 for a reference and a
>> proposal.
>>
>> On Thu, Feb 25, 2021 at 9:40 AM Boyang Chen <reluctanthero...@gmail.com>
>> wrote:
>>
>>> Thanks for the updates Walker. Some replies and follow-up questions:
>>>
>>> 1. I agree one task could have multiple partitions, but when we hit a
>>> delay in terms of offset progress, do we have a convenient way to
>>> reverse-map a TopicPartition to the problematic task? In production, I
>>> believe it would be much quicker to identify the problem using task.id
>>> instead of topic partition, especially when it points to an internal
>>> topic. I think having the task id as part of the entry value seems
>>> useful, which means getting something like Map<TopicPartition,
>>> TaskProgress> where TaskProgress contains both committed offsets & task
>>> id.
>>>
>>> 2. The task idling API was still confusing. I don't think we care about
>>> the exact state when making tasksIdling()que

Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #128

2021-03-01 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] KAFKA-10340: Proactively close producer when cancelling source 
tasks (#10016)


--
[...truncated 3.45 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos e

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #524

2021-03-01 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10340: Proactively close producer when cancelling source tasks 
(#10016)


--
[...truncated 7.29 MB...]

ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToNewReplicaTransition() STARTED

ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToNewReplicaTransition() PASSED

ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToReplicaDeletionSuccessfulTransition() STARTED

ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToReplicaDeletionSuccessfulTransition() PASSED

ReplicaStateMachineTest > testNewReplicaToOnlineReplicaTransition() STARTED

ReplicaStateMachineTest > testNewReplicaToOnlineReplicaTransition() PASSED

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrLive() 
STARTED

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrLive() PASSED

PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionLastIsrShuttingDown() STARTED

PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionLastIsrShuttingDown() PASSED

PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElection() STARTED

PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElection() PASSED

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaInIsrNotLive() 
STARTED

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaInIsrNotLive() PASSED

PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithNoLiveIsr() STARTED

PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithNoLiveIsr() PASSED

PartitionLeaderElectionAlgorithmsTest > testReassignPartitionLeaderElection() 
STARTED

PartitionLeaderElectionAlgorithmsTest > testReassignPartitionLeaderElection() 
PASSED

PartitionLeaderElectionAlgorithmsTest > testOfflinePartitionLeaderElection() 
STARTED

PartitionLeaderElectionAlgorithmsTest > testOfflinePartitionLeaderElection() 
PASSED

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection() STARTED

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection() PASSED

PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr() STARTED

PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr() PASSED

PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown() 
STARTED

PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown() 
PASSED

PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled() 
STARTED

PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled() 
PASSED

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive() 
STARTED

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive() 
PASSED

PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled() 
STARTED

PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled() 
PASSED

TopicDeletionManagerTest > testBrokerFailureAfterDeletionStarted() STARTED

TopicDeletionManagerTest > testBrokerFailureAfterDeletionStarted() PASSED

TopicDeletionManagerTest > testInitialization() STARTED

TopicDeletionManagerTest > testInitialization() PASSED

TopicDeletionManagerTest > testBasicDeletion() STARTED

TopicDeletionManagerTest > testBasicDeletion() PASSED

TopicDeletionManagerTest > testDeletionWithBrokerOffline() STARTED

TopicDeletionManagerTest > testDeletionWithBrokerOffline() PASSED

ControllerFailoverTest > testHandleIllegalStateException() STARTED

ControllerFailoverTest > testHandleIllegalStateException() PASSED

ZkNodeChangeNotificationListenerTest > testProcessNotification() STARTED

ZkNodeChangeNotificationListenerTest > testProcessNotification() PASSED

ZkNodeChangeNotificationListenerTest > testSwallowsProcessorException() STARTED

ZkNodeChangeNotificationListenerTest > testSwallowsProcessorException() PASSED

KafkaTest > testZookeeperKeyStorePassword() STARTED

KafkaTest > testZookeeperKeyStorePassword() PASSED

KafkaTest > testConnectionsMaxReauthMsExplicit() STARTED

KafkaTest > testConnectionsMaxReauthMsExplicit() PASSED

KafkaTest > testKafkaSslPasswordsWithSymbols() STARTED

KafkaTest > testKafkaSslPasswordsWithSymbols() PASSED

Kaf

Re: [VOTE] KIP-715: Expose Committed offset in streams

2021-03-01 Thread Leah Thomas
Hey Walker,

Thanks for leading this discussion. +1 from me, non-binding

Leah

On Mon, Mar 1, 2021 at 12:37 AM Boyang Chen 
wrote:

> Thanks Walker for the proposal, +1 (binding) from me.
>
> On Fri, Feb 26, 2021 at 12:42 PM Walker Carlson 
> wrote:
>
> > Hello all,
> >
> > I would like to bring KIP-715 to a vote. Here is the KIP:
> > https://cwiki.apache.org/confluence/x/aRRRCg.
> >
> > Walker
> >
>


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #580

2021-03-01 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12329; kafka-reassign-partitions command should give a better 
error message when a topic does not exist (#10141)


--
[...truncated 7.34 MB...]
UserQuotaTest > testThrottledRequest() STARTED

UserQuotaTest > testThrottledRequest() PASSED

LinuxIoMetricsCollectorTest > testReadProcFile() STARTED

LinuxIoMetricsCollectorTest > testReadProcFile() PASSED

LinuxIoMetricsCollectorTest > testUnableToReadNonexistentProcFile() STARTED

LinuxIoMetricsCollectorTest > testUnableToReadNonexistentProcFile() PASSED

AssignmentStateTest > [1] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(), removing=List(), original=List(), isUnderReplicated=false 
STARTED

AssignmentStateTest > [1] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(), removing=List(), original=List(), isUnderReplicated=false 
PASSED

AssignmentStateTest > [2] isr=List(101, 102), replicas=List(101, 102, 103), 
adding=List(), removing=List(), original=List(), isUnderReplicated=true STARTED

AssignmentStateTest > [2] isr=List(101, 102), replicas=List(101, 102, 103), 
adding=List(), removing=List(), original=List(), isUnderReplicated=true PASSED

AssignmentStateTest > [3] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(104, 105), removing=List(102), original=List(101, 102, 103), 
isUnderReplicated=false STARTED

AssignmentStateTest > [3] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(104, 105), removing=List(102), original=List(101, 102, 103), 
isUnderReplicated=false PASSED

AssignmentStateTest > [4] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(104, 105), removing=List(), original=List(101, 102, 103), 
isUnderReplicated=false STARTED

AssignmentStateTest > [4] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(104, 105), removing=List(), original=List(101, 102, 103), 
isUnderReplicated=false PASSED

AssignmentStateTest > [5] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(), removing=List(102), original=List(101, 102, 103), 
isUnderReplicated=false STARTED

AssignmentStateTest > [5] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(), removing=List(102), original=List(101, 102, 103), 
isUnderReplicated=false PASSED

AssignmentStateTest > [6] isr=List(102, 103), replicas=List(102, 103), 
adding=List(101), removing=List(), original=List(102, 103), 
isUnderReplicated=false STARTED

AssignmentStateTest > [6] isr=List(102, 103), replicas=List(102, 103), 
adding=List(101), removing=List(), original=List(102, 103), 
isUnderReplicated=false PASSED

AssignmentStateTest > [7] isr=List(103, 104, 105), replicas=List(101, 102, 
103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 
103), isUnderReplicated=false STARTED

AssignmentStateTest > [7] isr=List(103, 104, 105), replicas=List(101, 102, 
103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 
103), isUnderReplicated=false PASSED

AssignmentStateTest > [8] isr=List(103, 104, 105), replicas=List(101, 102, 
103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 
103), isUnderReplicated=false STARTED

AssignmentStateTest > [8] isr=List(103, 104, 105), replicas=List(101, 102, 
103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 
103), isUnderReplicated=false PASSED

AssignmentStateTest > [9] isr=List(103, 104), replicas=List(101, 102, 103), 
adding=List(104, 105, 106), removing=List(), original=List(101, 102, 103), 
isUnderReplicated=true STARTED

AssignmentStateTest > [9] isr=List(103, 104), replicas=List(101, 102, 103), 
adding=List(104, 105, 106), removing=List(), original=List(101, 102, 103), 
isUnderReplicated=true PASSED

PartitionTest > testMakeLeaderDoesNotUpdateEpochCacheForOldFormats() STARTED

PartitionTest > testMakeLeaderDoesNotUpdateEpochCacheForOldFormats() PASSED

PartitionTest > testIsrExpansion() STARTED

PartitionTest > testIsrExpansion() PASSED

PartitionTest > testReadRecordEpochValidationForLeader() STARTED

PartitionTest > testReadRecordEpochValidationForLeader() PASSED

PartitionTest > testAlterIsrUnknownTopic() STARTED

PartitionTest > testAlterIsrUnknownTopic() PASSED

PartitionTest > testIsrNotShrunkIfUpdateFails() STARTED

PartitionTest > testIsrNotShrunkIfUpdateFails() PASSED

PartitionTest > testFetchOffsetForTimestampEpochValidationForFollower() STARTED

PartitionTest > testFetchOffsetForTimestampEpochValidationForFollower() PASSED

PartitionTest > testIsrNotExpandedIfUpdateFails() STARTED

PartitionTest > testIsrNotExpandedIfUpdateFails() PASSED

PartitionTest > testLogConfigDirtyAsBrokerUpdated() STARTED

PartitionTest > testLogConfigDirtyAsBrokerUpdated() PASSED

PartitionTest > testAddAndRemoveMetrics() STARTED

PartitionTest > testAddAndRemoveMetrics() PASSED

PartitionTest > tes

Re: [DISCUSS] Apache Kafka 2.8.0 release

2021-03-01 Thread Dhruvil Shah
Hi John,

I would like to bring up https://issues.apache.org/jira/browse/KAFKA-12254
as a blocker candidate for 2.8.0. While this is not a regression, the
issue could lead to data loss in certain cases. The fix is trivial so it
may be worth bringing it into 2.8.0. Let me know what you think.

- Dhruvil

On Mon, Feb 22, 2021 at 7:50 AM John Roesler  wrote:

> Thanks for the heads-up, Chia-Ping,
>
> I agree it would be good to include that fix.
>
> Thanks,
> John
>
> On Mon, 2021-02-22 at 09:48 +, Chia-Ping Tsai wrote:
> > hi John,
> >
> > There is a PR (https://github.com/apache/kafka/pull/10024) fixing the
> > following test error:
> >
> > 14:00:28 Execution failed for task ':core:test'.
> > 14:00:28 > Process 'Gradle Test Executor 24' finished with non-zero exit
> value 1
> > 14:00:28   This problem might be caused by incorrect test process
> configuration.
> > 14:00:28   Please refer to the test execution section in the User Manual
> at
> >
> > This error obstructs us from running integration tests, so I'd like to
> > push it to the 2.8 branch after it gets approved.
> >
> > Best Regards,
> > Chia-Ping
> >
> > On 2021/02/18 16:23:13, "John Roesler"  wrote:
> > > Hello again, all.
> > >
> > > This is a notice that we are now in Code Freeze for the 2.8 branch.
> > >
> > > From now until the release, only fixes for blockers should be merged
> > > to the release branch. Fixes for failing tests are allowed and
> > > encouraged. Documentation-only commits are also ok, in case you have
> > > forgotten to update the docs for some features in 2.8.0.
> > >
> > > Once we have a green build and passing system tests, I will cut the
> > > first RC.
> > >
> > > Thank you,
> > > John
> > >
> > > On Sun, Feb 7, 2021, at 09:59, John Roesler wrote:
> > > > Hello all,
> > > >
> > > > I have just cut the branch for 2.8 and sent the notification
> > > > email to the dev mailing list.
> > > >
> > > > As a reminder, the next checkpoint toward the 2.8.0 release
> > > > is Code Freeze on Feb 17th.
> > > >
> > > > To ensure a high-quality release, we should now focus our
> > > > efforts on stabilizing the 2.8 branch, including resolving
> > > > failures, writing new tests, and fixing documentation.
> > > >
> > > > Thanks as always for your contributions,
> > > > John
> > > >
> > > >
> > > > On Wed, 2021-02-03 at 14:18 -0600, John Roesler wrote:
> > > > > Hello again, all,
> > > > >
> > > > > This is a reminder that today is the Feature Freeze
> > > > > deadline. To avoid any last-minute crunch or time-zone
> > > > > unfairness, I'll cut the branch toward the end of the week.
> > > > >
> > > > > Please wrap up your features and transition fully into a
> > > > > stabilization mode. The next checkpoint is Code Freeze on
> > > > > Feb 17th.
> > > > >
> > > > > Thanks as always for all of your contributions,
> > > > > John
> > > > >
> > > > > On Wed, 2021-01-27 at 12:17 -0600, John Roesler wrote:
> > > > > > Hello again, all.
> > > > > >
> > > > > > This is a reminder that *today* is the KIP freeze for Apache
> > > > > > Kafka 2.8.0.
> > > > > >
> > > > > > The next checkpoint is the Feature Freeze on Feb 3rd.
> > > > > >
> > > > > > When considering any last-minute KIPs today, please be
> > > > > > mindful of the scope, since we have only one week to merge a
> > > > > > stable implementation of the KIP.
> > > > > >
> > > > > > For those whose KIPs have been accepted already, please work
> > > > > > closely with your reviewers so that your features can be
> > > > > > merged in a stable form in before the Feb 3rd cutoff. Also,
> > > > > > don't forget to update the documentation as part of your
> > > > > > feature.
> > > > > >
> > > > > > Finally, as a gentle reminder to all contributors. There
> > > > > > seems to have been a recent increase in test and system test
> > > > > > failures. Please take some time starting now to stabilize
> > > > > > the codebase so we can ensure a high quality and timely
> > > > > > 2.8.0 release!
> > > > > >
> > > > > > Thanks to all of you for your contributions,
> > > > > > John
> > > > > >
> > > > > > On Sat, 2021-01-23 at 18:15 +0300, Ivan Ponomarev wrote:
> > > > > > > Hi John,
> > > > > > >
> > > > > > > KIP-418 is already implemented and reviewed, but I don't see
> > > > > > > it in the release plan. Can it be added?
> > > > > > >
> > > > > > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-418%3A+A+method-chaining+way+to+branch+KStream
> > > > > > >
> > > > > > > Regards,
> > > > > > >
> > > > > > > Ivan
> > > > > > >
> > > > > > > 22.01.2021 21:49, John Roesler пишет:
> > > > > > > > Sure thing, Leah!
> > > > > > > > -John
> > > > > > > > On Thu, Jan 21, 2021, at 07:54, Leah Thomas wrote:
> > > > > > > > > Hi John,
> > > > > > > > >
> > > > > > > > > KIP-659 was just accepted as well, can it be added to the
> > > > > > > > > release plan?
> > > > > > > > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-659%3A+Improve+TimeWindowedDeserializer+and+TimeWindowedSerde+to+handle+window+size
> > > > > > > > >
> > > > > > > 

[GitHub] [kafka-site] miguno commented on pull request #334: KAFKA-12393: Document multi-tenancy considerations

2021-03-01 Thread GitBox


miguno commented on pull request #334:
URL: https://github.com/apache/kafka-site/pull/334#issuecomment-788089637


   cc to committer @rajinisivaram, the SME on this subject, who has the most 
context.







Re: [DISCUSS] KIP-567: Kafka Cluster Audit (new discussion)

2021-03-01 Thread Viktor Somogyi-Vass
Hey Everyone,

Tried out a new format to get some attention and also to make understanding
easier, so I recorded a 15-minute video about this KIP:
https://www.youtube.com/watch?v=uOJTyAEJmB8&feature=youtu.be

Sorry for the sound quality, but recording a video isn't a thing for me, and
I look like someone who is being interrogated. But again, I don't often
do this. :)

If there are no further objections I will start a vote in a few days.

Viktor

On Tue, Feb 2, 2021 at 1:54 PM Viktor Somogyi-Vass 
wrote:

> Hi all,
>
> I have updated the interfaces. I managed to shrink the required number of
> entities. Basically I store the event type with the event, so we can
> cover all topic-related events (create, delete, change) with one event type.
>
> I think if no-one has objections then I'll start a vote soon.
>
> Viktor
>
> On Thu, Oct 29, 2020 at 5:15 PM Viktor Somogyi-Vass <
> viktorsomo...@gmail.com> wrote:
>
>> Hi Tom.
>>
>> Sorry for the delay.
>> Answering your points:
>>
>> > Why is it necessary to introduce this interface to produce the audit
>> > trail when there is logging that can already record a lot of the same
>> > information, albeit in less structured form? If logging isn't adequate
>> > it would be good to explain why not in the Motivation or Rejected
>> > Alternatives section. It's worth pointing out that even the "less
>> > structured" part would be helped by KIP-673, which proposes to change
>> > the RequestChannel's logging to include a JSON representation of the
>> > request.
>>
>> We will need authorization details, which an auditor would normally have
>> but a request logger doesn't, as you correctly pointed out later in your
>> reply. They would also appear at different lifecycle points, I imagine: the
>> request logger fires when the request enters Kafka, while the auditor
>> catches requests before sending the response, so it can obtain all
>> information (authorization, execution).
>> Furthermore this auditing API would specifically target other JVM based
>> components that depend on Kafka (like Ranger or Atlas), and from both
>> sides' perspective it's much better to expose Java level classes rather
>> than a lower level (JSON) representation. If a Java level object is
>> exposed, it only needs to be created once during request processing, which
>> is fairly low-fat since we're parsing the request most of the time anyway,
>> as opposed to JSON, which would need to be serialized first and then
>> deserialized for the consumer of the API.
>>
>> > I'm guessing what you gain from the proposed interface is the fact that
>> > it's called after the authorizer (perhaps after the request has been
>> > handled: I'm unclear about the purpose of AuditInfo.error), so you could
>> > generate a single record in the audit trail. That could still be
>> > achieved using logging, either by correlating existing log messages or
>> > by proposing some new logging just for this auditing purpose (perhaps
>> > with a logger per API key so people could avoid the performance hit on
>> > the produce and fetch paths if they weren't interested in auditing those
>> > things). Again, if this doesn't work it would be great for the KIP to
>> > explain why.
>>
>> AuditInfo.error serves to capture the possible errors that could
>> happen during the authorization and execution of the request. For instance,
>> a partition creation request could be authorized and then rejected
>> with INVALID_TOPIC_EXCEPTION because the topic is queued for deletion. In
>> this case the AuditInfo.error would contain this API error, thus emitting
>> information about the failure of the request. With normal auditing that
>> looks only at the authorization information we wouldn't detect it.
>> Regarding the produce and fetch performance: for these kinds of requests
>> I don't think we should enable parsing the batches themselves, but only
>> pass some meta information like which topics and partitions are affected.
>> These are parsed anyway for log reading and writing. Also, similarly to the
>> authorizer, we need to require implementations to run the auditing logic on
>> a different thread to minimize the performance impact.
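
For illustration, the callback shape being discussed could look roughly
like this Java sketch (names assumed; see the KIP-567 wiki page for the
actual proposed interfaces):

  // Called per request after authorization and execution, so both the
  // authorization result and any API-level error are available.
  public interface Auditor extends Configurable, AutoCloseable {
      void audit(AuditEvent event,
                 AuthorizableRequestContext requestContext,
                 AuditInfo auditInfo);
  }

  public class AuditInfo {
      private final AuthorizationResult result; // ALLOWED / DENIED
      private final ApiException error;         // null if the request succeeded

      public AuditInfo(final AuthorizationResult result, final ApiException error) {
          this.result = result;
          this.error = error;
      }
  }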
>>
>> > To me there were parallels with previous discussions about broker-side
>> > interceptors (
>> > https://www.mail-archive.com/dev@kafka.apache.org/msg103310.html if
>> > you've not seen it before), those seemed to founder on the unwillingness
>> > to make the request internal classes into a supported API. You've tried
>> > to address this by proposing a parallel set of classes implementing
>> > AuditEvent for exposing selective details about the request. It's not
>> > really clear that you really _need_ to access all that information about
>> > each request, rather than simply recording it all, and it would also
>> > come with a significant implementation and maintenance cost. If it's
>> > simply about recording all the information in the request, then it would
>> > likely

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #523

2021-03-01 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12329; kafka-reassign-partitions command should give a better 
error message when a topic does not exist (#10141)


--
[...truncated 3.65 MB...]

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() STARTED

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
PASSED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() STARTED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
PASSED

LogValidatorTest > testNonCompressedV1() STARTED

LogValidatorTest > testNonCompressedV1() PASSED

LogValidatorTest > testNonCompressedV2() STARTED

LogValidatorTest > testNonCompressedV2() PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV1() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV1() PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV2() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV2() PASSED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() STARTED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() PASSED

LogValidatorTest > testRecompressionV1() STARTED

LogValidatorTest > testRecompressionV1() PASSED

LogValidatorTest > testRecompressionV2() STARTED

LogValidatorTest > testRecompressionV2() PASSED

ProducerStateManagerTest > testSkipEmptyTransactions() STARTED

ProducerStateManagerTest > testSkipEmptyTransactions() PASSED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() STARTED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() PASSED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
STARTED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
PASSED

ProducerStateManagerTest > testCoordinatorFencing() STARTED

ProducerStateManagerTest > testCoordinatorFencing() PASSED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() PASSED

ProducerStateManagerTest > testTruncateFullyAndStartAt() STARTED

ProducerStateManagerTest > testTruncateFullyAndStartAt() PASSED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() STARTED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() STARTED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() PASSED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() 
STARTED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() PASSED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() STARTED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() PASSED

ProducerStateManagerTest > testTakeSnapshot() STARTED

ProducerStateManagerTest > testTakeSnapshot() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() 
STARTED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() PASSED

ProducerStateManagerTest > testDeleteSnapshotsBefore() STARTED

ProducerStateManagerTest > testDeleteSnapshotsBefore() PASSED

ProducerStateManagerTest > testAppendEmptyControlBatch() STARTED

ProducerStateManagerTest > testAppendEmptyControlBatch() PASSED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() STARTED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() PASSED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
STARTED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
PASSED

ProducerStateManagerTest > testRemoveAllStraySnapshots() STARTED

ProducerStateManagerTest > testRemoveAllStraySnapshots() PASSED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() PASSED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 
STARTED

ProducerStateManagerTest > testProdu

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #553

2021-03-01 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12329; kafka-reassign-partitions command should give a better 
error message when a topic does not exist (#10141)


--
[...truncated 3.66 MB...]

SocketServerTest > remoteCloseSendFailure() STARTED

SocketServerTest > remoteCloseSendFailure() PASSED

SocketServerTest > testBrokerSendAfterChannelClosedUpdatesRequestMetrics() 
STARTED

SocketServerTest > testBrokerSendAfterChannelClosedUpdatesRequestMetrics() 
PASSED

SocketServerTest > 
testControlPlaneTakePrecedenceOverInterBrokerListenerAsPrivilegedListener() 
STARTED

SocketServerTest > 
testControlPlaneTakePrecedenceOverInterBrokerListenerAsPrivilegedListener() 
PASSED

SocketServerTest > testNoOpAction() STARTED

SocketServerTest > testNoOpAction() PASSED

SocketServerTest > simpleRequest() STARTED

SocketServerTest > simpleRequest() PASSED

SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress() STARTED

SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress() PASSED

SocketServerTest > testIdleConnection() STARTED

SocketServerTest > testIdleConnection() PASSED

SocketServerTest > remoteCloseWithoutBufferedReceives() STARTED

SocketServerTest > remoteCloseWithoutBufferedReceives() PASSED

SocketServerTest > remoteCloseWithCompleteAndIncompleteBufferedReceives() 
STARTED

SocketServerTest > remoteCloseWithCompleteAndIncompleteBufferedReceives() PASSED

SocketServerTest > testZeroMaxConnectionsPerIp() STARTED

SocketServerTest > testZeroMaxConnectionsPerIp() PASSED

SocketServerTest > testClientInformationWithLatestApiVersionsRequest() STARTED

SocketServerTest > testClientInformationWithLatestApiVersionsRequest() PASSED

SocketServerTest > testMetricCollectionAfterShutdown() STARTED

SocketServerTest > testMetricCollectionAfterShutdown() PASSED

SocketServerTest > testSessionPrincipal() STARTED

SocketServerTest > testSessionPrincipal() PASSED

SocketServerTest > configureNewConnectionException() STARTED

SocketServerTest > configureNewConnectionException() PASSED

SocketServerTest > testSaslReauthenticationFailureWithKip152SaslAuthenticate() 
STARTED

SocketServerTest > testSaslReauthenticationFailureWithKip152SaslAuthenticate() 
PASSED

SocketServerTest > testMaxConnectionsPerIpOverrides() STARTED

SocketServerTest > testMaxConnectionsPerIpOverrides() PASSED

SocketServerTest > processNewResponseException() STARTED

SocketServerTest > processNewResponseException() PASSED

SocketServerTest > remoteCloseWithIncompleteBufferedReceive() STARTED

SocketServerTest > remoteCloseWithIncompleteBufferedReceive() PASSED

SocketServerTest > testStagedListenerShutdownWhenConnectionQueueIsFull() STARTED

SocketServerTest > testStagedListenerShutdownWhenConnectionQueueIsFull() PASSED

SocketServerTest > testStagedListenerStartup() STARTED

SocketServerTest > testStagedListenerStartup() PASSED

SocketServerTest > testConnectionRateLimit() STARTED

SocketServerTest > testConnectionRateLimit() PASSED

SocketServerTest > testConnectionRatePerIp() STARTED

SocketServerTest > testConnectionRatePerIp() PASSED

SocketServerTest > processCompletedSendException() STARTED

SocketServerTest > processCompletedSendException() PASSED

SocketServerTest > processDisconnectedException() STARTED

SocketServerTest > processDisconnectedException() PASSED

SocketServerTest > closingChannelWithBufferedReceives() STARTED

SocketServerTest > closingChannelWithBufferedReceives() PASSED

SocketServerTest > sendCancelledKeyException() STARTED

SocketServerTest > sendCancelledKeyException() PASSED

SocketServerTest > processCompletedReceiveException() STARTED

SocketServerTest > processCompletedReceiveException() PASSED

SocketServerTest > testControlPlaneAsPrivilegedListener() STARTED

SocketServerTest > testControlPlaneAsPrivilegedListener() PASSED

SocketServerTest > closingChannelSendFailure() STARTED

SocketServerTest > closingChannelSendFailure() PASSED

SocketServerTest > idleExpiryWithBufferedReceives() STARTED

SocketServerTest > idleExpiryWithBufferedReceives() PASSED

SocketServerTest > testSocketsCloseOnShutdown() STARTED

SocketServerTest > testSocketsCloseOnShutdown() PASSED

SocketServerTest > 
testNoOpActionResponseWithThrottledChannelWhereThrottlingAlreadyDone() STARTED

SocketServerTest > 
testNoOpActionResponseWithThrottledChannelWhereThrottlingAlreadyDone() PASSED

SocketServerTest > pollException() STARTED

SocketServerTest > pollException() PASSED

SocketServerTest > closingChannelWithBufferedReceivesFailedSend() STARTED

SocketServerTest > closingChannelWithBufferedReceivesFailedSend() PASSED

SocketServerTest > remoteCloseWithBufferedReceives() STARTED

SocketServerTest > remoteCloseWithBufferedReceives() PASSED

SocketServerTest > testThrottledSocketsClosedOnShutdown() STARTED

SocketServerTest > testThrottledSocketsClosedOnShutdown() PASSED

[GitHub] [kafka-site] miguno opened a new pull request #334: KAFKA-12393: Document multi-tenancy considerations

2021-03-01 Thread GitBox


miguno opened a new pull request #334:
URL: https://github.com/apache/kafka-site/pull/334


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (KAFKA-12393) Document multi-tenancy considerations

2021-03-01 Thread Michael G. Noll (Jira)
Michael G. Noll created KAFKA-12393:
---

 Summary: Document multi-tenancy considerations
 Key: KAFKA-12393
 URL: https://issues.apache.org/jira/browse/KAFKA-12393
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Reporter: Michael G. Noll
Assignee: Michael G. Noll


We should provide an overview of multi-tenancy considerations (e.g., user 
spaces, security), as the current documentation lacks such information.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12329) kafka-reassign-partitions command should give a better error message when a topic does not exist

2021-03-01 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-12329.
-
Fix Version/s: 3.0.0
   Resolution: Fixed

> kafka-reassign-partitions command should give a better error message when a 
> topic does not exist
> 
>
> Key: KAFKA-12329
> URL: https://issues.apache.org/jira/browse/KAFKA-12329
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 3.0.0
>
>
> The `kafka-reassign-partitions` command spits out a generic error when the 
> reassignment contains a topic which does not exist:
> {noformat}
> $ ./bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 
> --execute --reassignment-json-file reassignment.json
> Error: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition.
> {noformat}
> When the reassignment contains multiple topic-partitions, this is quite 
> annoying. It would be better if it at least named the affected 
> topic-partition.
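
For releases without this fix, a caller can pre-validate topic existence with 
the Admin client before submitting the reassignment. A minimal sketch, assuming 
a broker at localhost:9092 and illustrative topic names taken from the 
reassignment JSON:

{code:java}
import java.util.Arrays;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;

public class ReassignmentPreCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Topic names taken from the reassignment JSON (illustrative values).
            Map<String, KafkaFuture<TopicDescription>> descriptions =
                    admin.describeTopics(Arrays.asList("orders", "no-such-topic")).values();
            for (Map.Entry<String, KafkaFuture<TopicDescription>> entry : descriptions.entrySet()) {
                try {
                    entry.getValue().get();
                } catch (ExecutionException e) {
                    if (e.getCause() instanceof UnknownTopicOrPartitionException) {
                        // Name the concrete topic instead of the generic error.
                        System.err.println("Topic does not exist: " + entry.getKey());
                    }
                }
            }
        }
    }
}
{code}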



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSS] KIP-717: Deprecate batch-size config from console producer

2021-03-01 Thread Kamal Chandraprakash
Hi,

Please take a look at KIP-717 and share your comments.

https://cwiki.apache.org/confluence/x/DB1RCg

Thanks,
Kamal


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #522

2021-03-01 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12254: Ensure MM2 creates topics with source topic configs 
(#10217)


--
[...truncated 3.65 MB...]

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() STARTED

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
PASSED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() STARTED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
PASSED

LogValidatorTest > testNonCompressedV1() STARTED

LogValidatorTest > testNonCompressedV1() PASSED

LogValidatorTest > testNonCompressedV2() STARTED

LogValidatorTest > testNonCompressedV2() PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV1() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV1() PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV2() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV2() PASSED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() STARTED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() PASSED

LogValidatorTest > testRecompressionV1() STARTED

LogValidatorTest > testRecompressionV1() PASSED

LogValidatorTest > testRecompressionV2() STARTED

LogValidatorTest > testRecompressionV2() PASSED

ProducerStateManagerTest > testSkipEmptyTransactions() STARTED

ProducerStateManagerTest > testSkipEmptyTransactions() PASSED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() STARTED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() PASSED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
STARTED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
PASSED

ProducerStateManagerTest > testCoordinatorFencing() STARTED

ProducerStateManagerTest > testCoordinatorFencing() PASSED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() PASSED

ProducerStateManagerTest > testTruncateFullyAndStartAt() STARTED

ProducerStateManagerTest > testTruncateFullyAndStartAt() PASSED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() STARTED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() STARTED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() PASSED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() 
STARTED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() PASSED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() STARTED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() PASSED

ProducerStateManagerTest > testTakeSnapshot() STARTED

ProducerStateManagerTest > testTakeSnapshot() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() 
STARTED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() PASSED

ProducerStateManagerTest > testDeleteSnapshotsBefore() STARTED

ProducerStateManagerTest > testDeleteSnapshotsBefore() PASSED

ProducerStateManagerTest > testAppendEmptyControlBatch() STARTED

ProducerStateManagerTest > testAppendEmptyControlBatch() PASSED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() STARTED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() PASSED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
STARTED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
PASSED

ProducerStateManagerTest > testRemoveAllStraySnapshots() STARTED

ProducerStateManagerTest > testRemoveAllStraySnapshots() PASSED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() PASSED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 
STARTED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 
PASSED

[jira] [Created] (KAFKA-12392) Deprecate batch-size config from console producer

2021-03-01 Thread Kamal Chandraprakash (Jira)
Kamal Chandraprakash created KAFKA-12392:


 Summary: Deprecate batch-size config from console producer
 Key: KAFKA-12392
 URL: https://issues.apache.org/jira/browse/KAFKA-12392
 Project: Kafka
  Issue Type: Improvement
  Components: producer 
Affects Versions: 2.7.0
Reporter: Kamal Chandraprakash
Assignee: Kamal Chandraprakash


In the console producer, the {{batch-size}} option is unused: the console 
producer does not batch messages by count. This config should be deprecated 
and removed, as it is not applicable to the new producer.
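
For context, the modern producer batches by bytes per partition ({{batch.size}}) 
and by time ({{linger.ms}}), not by record count, which is why a count-based 
{{batch-size}} has no effect. A minimal sketch of the relevant settings (broker 
address, topic, and values are illustrative):

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class BatchingExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // The new producer batches by bytes per partition, not by record count ...
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, "16384"); // batch.size, in bytes
        // ... and by how long it is willing to wait for a batch to fill.
        props.put(ProducerConfig.LINGER_MS_CONFIG, "5");      // linger.ms, in milliseconds
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value"));
        }
    }
}
{code}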



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #579

2021-03-01 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12254: Ensure MM2 creates topics with source topic configs 
(#10217)


--
[...truncated 3.67 MB...]

UserQuotaTest > testProducerConsumerOverrideUnthrottled() STARTED

UserQuotaTest > testProducerConsumerOverrideUnthrottled() PASSED

UserQuotaTest > testThrottledProducerConsumer() STARTED

UserQuotaTest > testThrottledProducerConsumer() PASSED

UserQuotaTest > testQuotaOverrideDelete() STARTED

UserQuotaTest > testQuotaOverrideDelete() PASSED

UserQuotaTest > testThrottledRequest() STARTED

UserQuotaTest > testThrottledRequest() PASSED

LinuxIoMetricsCollectorTest > testReadProcFile() STARTED

LinuxIoMetricsCollectorTest > testReadProcFile() PASSED

LinuxIoMetricsCollectorTest > testUnableToReadNonexistentProcFile() STARTED

LinuxIoMetricsCollectorTest > testUnableToReadNonexistentProcFile() PASSED

AssignmentStateTest > [1] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(), removing=List(), original=List(), isUnderReplicated=false 
STARTED

AssignmentStateTest > [1] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(), removing=List(), original=List(), isUnderReplicated=false 
PASSED

AssignmentStateTest > [2] isr=List(101, 102), replicas=List(101, 102, 103), 
adding=List(), removing=List(), original=List(), isUnderReplicated=true STARTED

AssignmentStateTest > [2] isr=List(101, 102), replicas=List(101, 102, 103), 
adding=List(), removing=List(), original=List(), isUnderReplicated=true PASSED

AssignmentStateTest > [3] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(104, 105), removing=List(102), original=List(101, 102, 103), 
isUnderReplicated=false STARTED

AssignmentStateTest > [3] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(104, 105), removing=List(102), original=List(101, 102, 103), 
isUnderReplicated=false PASSED

AssignmentStateTest > [4] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(104, 105), removing=List(), original=List(101, 102, 103), 
isUnderReplicated=false STARTED

AssignmentStateTest > [4] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(104, 105), removing=List(), original=List(101, 102, 103), 
isUnderReplicated=false PASSED

AssignmentStateTest > [5] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(), removing=List(102), original=List(101, 102, 103), 
isUnderReplicated=false STARTED

AssignmentStateTest > [5] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(), removing=List(102), original=List(101, 102, 103), 
isUnderReplicated=false PASSED

AssignmentStateTest > [6] isr=List(102, 103), replicas=List(102, 103), 
adding=List(101), removing=List(), original=List(102, 103), 
isUnderReplicated=false STARTED

AssignmentStateTest > [6] isr=List(102, 103), replicas=List(102, 103), 
adding=List(101), removing=List(), original=List(102, 103), 
isUnderReplicated=false PASSED

AssignmentStateTest > [7] isr=List(103, 104, 105), replicas=List(101, 102, 
103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 
103), isUnderReplicated=false STARTED

AssignmentStateTest > [7] isr=List(103, 104, 105), replicas=List(101, 102, 
103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 
103), isUnderReplicated=false PASSED

AssignmentStateTest > [8] isr=List(103, 104, 105), replicas=List(101, 102, 
103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 
103), isUnderReplicated=false STARTED

AssignmentStateTest > [8] isr=List(103, 104, 105), replicas=List(101, 102, 
103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 
103), isUnderReplicated=false PASSED

AssignmentStateTest > [9] isr=List(103, 104), replicas=List(101, 102, 103), 
adding=List(104, 105, 106), removing=List(), original=List(101, 102, 103), 
isUnderReplicated=true STARTED

AssignmentStateTest > [9] isr=List(103, 104), replicas=List(101, 102, 103), 
adding=List(104, 105, 106), removing=List(), original=List(101, 102, 103), 
isUnderReplicated=true PASSED

PartitionTest > testMakeLeaderDoesNotUpdateEpochCacheForOldFormats() STARTED

PartitionTest > testMakeLeaderDoesNotUpdateEpochCacheForOldFormats() PASSED

PartitionTest > testIsrExpansion() STARTED

PartitionTest > testIsrExpansion() PASSED

PartitionTest > testReadRecordEpochValidationForLeader() STARTED

PartitionTest > testReadRecordEpochValidationForLeader() PASSED

PartitionTest > testAlterIsrUnknownTopic() STARTED

PartitionTest > testAlterIsrUnknownTopic() PASSED

PartitionTest > testIsrNotShrunkIfUpdateFails() STARTED

PartitionTest > testIsrNotShrunkIfUpdateFails() PASSED

PartitionTest > testFetchOffsetForTimestampEpochValidationForFollower() STARTED

PartitionTest > testFetchOffsetForTimestampEpochValidationForFollower() PASSED

PartitionTest > testIsrNotExpandedIfUpdateFails() STARTED

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #552

2021-03-01 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12254: Ensure MM2 creates topics with source topic configs 
(#10217)


--
[...truncated 3.65 MB...]

ControllerChannelManagerTest > testStopReplicaGroupsByBroker() PASSED

ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr() STARTED

ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr() PASSED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
STARTED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
PASSED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
STARTED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
PASSED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() STARTED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() PASSED

ControllerChannelManagerTest > testStopReplicaRequestSent() STARTED

ControllerChannelManagerTest > testStopReplicaRequestSent() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted() PASSED

ControllerChannelManagerTest > testLeaderAndIsrRequestSent() STARTED

ControllerChannelManagerTest > testLeaderAndIsrRequestSent() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestWithoutDeletePartitionWhileTopicDeletionStarted() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestWithoutDeletePartitionWhileTopicDeletionStarted() PASSED

FeatureZNodeTest > testDecodeFailOnInvalidFeatures() STARTED

FeatureZNodeTest > testDecodeFailOnInvalidFeatures() PASSED

FeatureZNodeTest > testEncodeDecode() STARTED

FeatureZNodeTest > testEncodeDecode() PASSED

FeatureZNodeTest > testDecodeSuccess() STARTED

FeatureZNodeTest > testDecodeSuccess() PASSED

FeatureZNodeTest > testDecodeFailOnInvalidVersionAndStatus() STARTED

FeatureZNodeTest > testDecodeFailOnInvalidVersionAndStatus() PASSED

ExtendedAclStoreTest > shouldHaveCorrectPaths() STARTED

ExtendedAclStoreTest > shouldHaveCorrectPaths() PASSED

ExtendedAclStoreTest > shouldRoundTripChangeNode() STARTED

ExtendedAclStoreTest > shouldRoundTripChangeNode() PASSED

ExtendedAclStoreTest > shouldThrowFromEncodeOnLiteral() STARTED

ExtendedAclStoreTest > shouldThrowFromEncodeOnLiteral() PASSED

ExtendedAclStoreTest > shouldThrowIfConstructedWithLiteral() STARTED

ExtendedAclStoreTest > shouldThrowIfConstructedWithLiteral() PASSED

ExtendedAclStoreTest > shouldWriteChangesToTheWritePath() STARTED

ExtendedAclStoreTest > shouldWriteChangesToTheWritePath() PASSED

ExtendedAclStoreTest > shouldHaveCorrectPatternType() STARTED

ExtendedAclStoreTest > shouldHaveCorrectPatternType() PASSED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@41034b0e, value = [B@1b9b1620), properties=Map(print.value -> false), 
expected= STARTED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@41034b0e, value = [B@1b9b1620), properties=Map(print.value -> false), 
expected= PASSED

DefaultMessageFormatterTest > [2] name=print key, record=ConsumerRecord(topic = 
someTopic, partition = 9, leaderEpoch = null, offset = 9876, CreateTime = 1234, 
serialized key size = 0, serialized value size = 0, headers = 
RecordHeaders(headers = [RecordHeader(key = h1, value = [118, 49]), 
RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), key = 
[B@72ad8e39, value = [B@548d25b6), properties=Map(print.key -> true, 
print.value -> false), expected=someKey
 STARTED

DefaultMessageFormatterTest > [2] name=print key, record=ConsumerRecord(topic = 
someTopic, partition = 9, leaderEpoch = null, offset = 9876, CreateTime = 1234, 
serialized key size = 0, serialized value size = 0, headers = 
RecordHeaders(headers = [RecordHeader(key = h1, value = [118, 49]

Re: [VOTE] KIP-699: Update FindCoordinator to resolve multiple Coordinators at a time

2021-03-01 Thread Tom Bentley
+1 (non-binding), thanks Mickael.

On Thu, Feb 25, 2021 at 6:32 PM Mickael Maison  wrote:

> Hi,
>
> I'd like to start a vote on KIP-699 to support resolving multiple
> coordinators at a time:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-699%3A+Update+FindCoordinator+to+resolve+multiple+Coordinators+at+a+time
>
> Thanks
>
>


[jira] [Created] (KAFKA-12391) Add an option to store arbitrary metadata to a SourceRecord

2021-03-01 Thread Luca Burgazzoli (Jira)
Luca Burgazzoli created KAFKA-12391:
---

 Summary: Add an option to store arbitrary metadata to a 
SourceRecord
 Key: KAFKA-12391
 URL: https://issues.apache.org/jira/browse/KAFKA-12391
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Luca Burgazzoli


When writing Source Connectors for Kafka, it may be necessary to perform some 
additional housekeeping once a record has been acknowledged by the Kafka 
broker. As of today, it is possible to set up a hook for this by overriding 
SourceTask.commitRecord(SourceRecord).

This works fine in most cases, but to make it easier for a source connector to 
perform its internal housekeeping, it would be nice to have an option to attach 
additional metadata to the SourceRecord without affecting the record sent to 
the Kafka broker, something like:

{code:java}
class SourceRecord {
    // Note: the Map type parameters here are illustrative; the generics were
    // stripped from the original snippet.
    public SourceRecord(
            ...,
            Map<String, Object> attributes) {
        ...
        this.attributes = attributes;
    }

    Map<String, Object> attributes() {
        return attributes;
    }
}
{code}
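
A hypothetical usage sketch of the proposal: the {{attributes()}} accessor does 
not exist in Kafka Connect today, and the attribute key {{upstream.file}} is 
purely illustrative.

{code:java}
import java.util.List;
import java.util.Map;

import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class CleaningSourceTask extends SourceTask {

    @Override
    public void commitRecord(SourceRecord record) throws InterruptedException {
        // Proposed accessor (not an existing API): read back metadata attached
        // when the record was created, without it ever reaching the broker.
        Object file = record.attributes().get("upstream.file");
        // ... delete or acknowledge the upstream resource identified by `file` ...
    }

    @Override public String version() { return "0.0.1"; }
    @Override public void start(Map<String, String> props) { }
    @Override public List<SourceRecord> poll() { return null; }
    @Override public void stop() { }
}
{code}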





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12390) error when storing group assignment during SyncGroup

2021-03-01 Thread Andrey (Jira)
Andrey created KAFKA-12390:
--

 Summary: error when storing group assignment during SyncGroup 
 Key: KAFKA-12390
 URL: https://issues.apache.org/jira/browse/KAFKA-12390
 Project: Kafka
  Issue Type: Bug
  Components: offset manager
Affects Versions: 2.4.0
Reporter: Andrey
 Attachments: negative lag kafka.png

There is a cluster consisting of 3 nodes (Apache Kafka 2.4.0). The following 
messages were observed in the logs of one of the nodes:
Preparing to rebalance group corecft-adapter2 in state PreparingRebalance with 
old generation 5043 (__consumer_offsets-22) (reason: error when storing group 
assignment during SyncGroup (member: 
consumer-5-5f3f994d-7c7c-43d6-8e0b-e011bdc2f9ba)) 
(kafka.coordinator.group.GroupCoordinator)

Messages from the topic were read, but no commit occurred. Then a rebalance 
took place and the data in the topics was "doubled" (consumed twice). There was 
also a "negative" lag.

There are no error messages in the ZooKeeper logs.
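
Not a confirmed fix for this report, but a sketch of consumer settings that 
commonly reduce rebalance-induced duplicates (all values are illustrative):

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;

public class ConsumerTuning {
    public static Properties rebalanceFriendlyProps() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "corecft-adapter2");
        // Commit manually after processing so a failed SyncGroup does not leave
        // large batches of processed-but-uncommitted messages behind.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        // Give slow processing more headroom before the member is kicked out
        // and a rebalance is triggered.
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "600000");
        // Fewer records per poll() shrinks the window that gets re-consumed
        // (and "doubled") after a rebalance.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100");
        return props;
    }
}
{code}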

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12389) Upgrade of netty-codec due to CVE-2021-21290

2021-03-01 Thread Dominique Mongelli (Jira)
Dominique Mongelli created KAFKA-12389:
--

 Summary: Upgrade of netty-codec due to CVE-2021-21290
 Key: KAFKA-12389
 URL: https://issues.apache.org/jira/browse/KAFKA-12389
 Project: Kafka
  Issue Type: Bug
  Components: security
Affects Versions: 2.7.0
Reporter: Dominique Mongelli


Our security tool flagged the following vulnerability in Kafka 2.7: 
[https://nvd.nist.gov/vuln/detail/CVE-2021-21290]

It is a vulnerability in the *netty-codec-4.1.51.Final.jar* dependency.

Looking at the source code, the netty-codec version referenced on the trunk 
and 2.7.0 branches is still vulnerable.

Based on netty issue tracker, the vulnerability is fixed in 4.1.59.Final: 
https://github.com/netty/netty/security/advisories/GHSA-5mcr-gq6c-3hq2
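
Until the dependency is bumped, a deployment can at least verify which Netty 
artifacts are actually on its classpath via Netty's own version registry (this 
sketch assumes netty-common is present):

{code:java}
import java.util.Map;

import io.netty.util.Version;

public class NettyVersionCheck {
    public static void main(String[] args) {
        // Prints each Netty artifact found on the classpath with its version,
        // e.g. "netty-codec -> 4.1.51.Final" (vulnerable) vs "4.1.59.Final" (fixed).
        for (Map.Entry<String, Version> e : Version.identify().entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue().artifactVersion());
        }
    }
}
{code}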



--
This message was sent by Atlassian Jira
(v8.3.4#803005)