[jira] [Resolved] (KAFKA-10183) MirrorMaker creates duplicate messages in target cluster

2020-06-21 Thread Liraz Sharaby (Jira)


 [ https://issues.apache.org/jira/browse/KAFKA-10183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Liraz Sharaby resolved KAFKA-10183.
---
Resolution: Done

Increasing tasks.max seems to have resolved the issue.

As per the documentation example, our tasks.max was set to 1.
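The setting referred to is Connect's tasks.max. A hedged mm2.properties sketch of where it lives; cluster aliases, broker addresses, and the value 4 are illustrative, not taken from the reporter's environment:

```properties
# Illustrative MirrorMaker 2 (mm2.properties) sketch -- names are made up.
clusters = source, target
source.bootstrap.servers = src1:9092,src2:9092,src3:9092
target.bootstrap.servers = dst1:9092

source->target.enabled = true
source->target.topics = .*

# The documented example uses 1; raising it allows more parallel Connect tasks.
tasks.max = 4
```

tasks.max caps the number of Connect tasks MirrorMaker may spawn per connector, so the documented example value of 1 restricts replication to a single task.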

> MirrorMaker creates duplicate messages in target cluster
> 
>
> Key: KAFKA-10183
> URL: https://issues.apache.org/jira/browse/KAFKA-10183
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.4.0, 2.5.0
> Environment: Centos7.7
>Reporter: Liraz Sharaby
>Priority: Major
>
> Issue: Mirror maker creates a consumer-producer pair per server listed in 
> bootstrap.servers (mirrormaker config), resulting in duplicate messages in 
> target cluster.
> When specifying 3 bootstrap servers, target topic will have 3 times the 
> messages its source does.
> When specifying a single bootstrap server, only 1 consumer-producer pair is 
> created, and message count is identical in source and target topics.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4660

2020-06-21 Thread Apache Jenkins Server
See 


Changes:

[manikumar] KAFKA-9194: Update documentation for replica.fetch.min.bytes config


--
[...truncated 3.15 MB...]
[...test output truncated: runs of STARTED/PASSED lines for TopologyTestDriverTest, MockTimeTest, and WindowStoreFacadeTest cases, cut off mid-line...]

Build failed in Jenkins: kafka-trunk-jdk14 #237

2020-06-21 Thread Apache Jenkins Server
See 


Changes:

[manikumar] KAFKA-9194: Update documentation for replica.fetch.min.bytes config


--
[...truncated 3.18 MB...]

[...test output truncated: runs of STARTED/PASSED lines for TopologyTestDriverTest cases, cut off mid-line...]

Re: [DISCUSS] KIP-431: Support of printing additional ConsumerRecord fields in DefaultMessageFormatter

2020-06-21 Thread Badai Aqrandista
Excellent.

Would like to hear more feedback from others.

On Sat, Jun 20, 2020 at 1:27 AM David Jacot  wrote:
>
> Hi Badai,
>
> Thanks for your reply.
>
> 2. Yes, that makes sense.
>
> Best,
> David
>
> On Thu, Jun 18, 2020 at 2:08 PM Badai Aqrandista  wrote:
>
> > David
> >
> > Thank you for replying
> >
> > 1. It seems that `print.partition` is already implemented. Do you confirm?
> > BADAI: Yes, you are correct. I have removed it from the KIP.
> >
> > 2. Will `null.literal` be only used when the value of the message
> > is NULL or for any fields? Also, it seems that we print out "null"
> > today when the key or the value is empty. Shall we use "null" as
> > a default instead of ""?
> > BADAI: For any fields. Do you think this is useful?
> >
> > 3. Could we add a small example of the output in the KIP?
> > BADAI: Yes, I have updated the KIP to add a couple of examples.
> >
> > 4. When there are no headers, are we going to print something
> > to indicate it to the user? For instance, we print out NO_TIMESTAMP
> > where there is no timestamp.
> > BADAI: Yes, good idea. I have updated the KIP to print NO_HEADERS.
> >
> > Thanks
> > Badai
> >
> >
> > On Thu, Jun 18, 2020 at 7:25 PM David Jacot  wrote:
> > >
> > > Hi Badai,
> > >
> > > Thanks for resuming this. I have few small comments:
> > >
> > > 1. It seems that `print.partition` is already implemented. Do you
> > confirm?
> > >
> > > 2. Will `null.literal` be only used when the value of the message
> > > is NULL or for any fields? Also, it seems that we print out "null"
> > > today when the key or the value is empty. Shall we use "null" as
> > > a default instead of ""?
> > >
> > > 3. Could we add a small example of the output in the KIP?
> > >
> > > 4. When there are no headers, are we going to print something
> > > to indicate it to the user? For instance, we print out NO_TIMESTAMP
> > > where there is no timestamp.
> > >
> > > Best,
> > > David
> > >
> > > On Wed, Jun 17, 2020 at 4:53 PM Badai Aqrandista 
> > wrote:
> > >
> > > > Hi all,
> > > >
> > > > I have contacted Mateusz separately and he is ok for me to take over
> > > > KIP-431:
> > > >
> > > >
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-431%3A+Support+of+printing+additional+ConsumerRecord+fields+in+DefaultMessageFormatter
> > > >
> > > > I have updated it a bit. Can anyone give a quick look at it again and
> > > > give me some feedback?
> > > >
> > > > This feature will be very helpful for people supporting Kafka in
> > > > operations.
> > > >
> > > > If it is ready for a vote, please let me know.
> > > >
> > > > Thanks
> > > > Badai
> > > >
> > > > On Sat, Jun 13, 2020 at 10:59 PM Badai Aqrandista 
> > > > wrote:
> > > > >
> > > > > Mateusz
> > > > >
> > > > > This KIP would be very useful for debugging. But the last discussion
> > > > > is in Feb 2019.
> > > > >
> > > > > Are you ok if I take over this KIP?
> > > > >
> > > > > --
> > > > > Thanks,
> > > > > Badai
> > > >
> > > >
> > > >
> > > > --
> > > > Thanks,
> > > > Badai
> > > >
> >
> >
> >
> > --
> > Thanks,
> > Badai
> >



-- 
Thanks,
Badai
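For reference, the output behavior discussed in this thread (optional fields, a NO_HEADERS placeholder, and a configurable null.literal) might look roughly like the following toy Python sketch. This is a model of the proposal only, not the actual Scala DefaultMessageFormatter; the property names follow the KIP draft as quoted above.

```python
# Toy model of KIP-431-style record formatting. Not Kafka code; the record is
# a plain dict and the property names mirror the KIP discussion above.
def format_record(record, props):
    null_literal = props.get("null.literal", "null")  # proposed default
    parts = []
    if props.get("print.timestamp") == "true":
        ts = record.get("timestamp")
        parts.append(f"CreateTime:{ts}" if ts is not None else "NO_TIMESTAMP")
    if props.get("print.partition") == "true":
        parts.append(f"Partition:{record['partition']}")
    if props.get("print.headers") == "true":
        headers = record.get("headers") or []
        if headers:
            parts.append(",".join(f"{k}:{v}" for k, v in headers))
        else:
            # Mirrors NO_TIMESTAMP, per point 4 in the discussion.
            parts.append("NO_HEADERS")
    if props.get("print.key") == "true":
        key = record["key"]
        parts.append(key if key is not None else null_literal)
    value = record["value"]
    parts.append(value if value is not None else null_literal)
    return "\t".join(parts)
```

For example, a record with no headers and a null key would render as `Partition:0<TAB>NO_HEADERS<TAB>null<TAB>v` with partition, header, and key printing enabled.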


Jenkins build is back to normal : kafka-trunk-jdk11 #1589

2020-06-21 Thread Apache Jenkins Server
See 




Re: Highwater mark interpretation

2020-06-21 Thread D C
The short answer is : yes, a consumer can only consume messages up to the
High Watermark.

The long answer is not exactly, for the following reasons:

At the partition level you have 3 major offsets that are important to the
health of the partition and accessibility from the consumer pov:
LEO (log end offset) - which represents the highest offset in the highest
segment
High Watermark - which represents the latest offset that has been
replicated to all the in-sync followers
LSO (last stable offset) - which is important when you use producers that
create transactions - it represents the highest offset that has been
committed by a transaction and that is allowed to be read with isolation
level = read_committed.

The LEO can only be higher than or equal to the High Watermark (for obvious
reasons).
The High Watermark can only be higher than or equal to the LSO (the messages up
to this point may have been committed to all the followers but the
transaction isn't yet finished).
And coming to your question: in case the transaction hasn't finished, the
LSO may be lower than the High Watermark, so if your consumer is reading
the data with read_committed, it won't be able to surpass the LSO.
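The ordering of those three offsets can be sketched as a toy model; this illustrates only the invariant LSO <= HW <= LEO and how isolation level picks the read bound, not Kafka's actual replica code.

```python
# Toy model of the partition offsets discussed above (not Kafka internals).
# Invariant: lso <= high_watermark <= leo.
def readable_offsets(leo, high_watermark, lso, isolation_level):
    """Return the exclusive upper bound of offsets a consumer may fetch."""
    assert lso <= high_watermark <= leo, "offset invariant violated"
    if isolation_level == "read_committed":
        # An open transaction holds read_committed consumers back at the LSO.
        return lso
    # read_uncommitted consumers still stop at the high watermark, never the LEO.
    return high_watermark

# Example: an open transaction ends at offset 90 (LSO), replication has
# reached 95 (HW), and the leader's log ends at 100 (LEO).
```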

Cheers,
D

On Sat, Jun 20, 2020 at 9:05 PM Nag Y  wrote:

> As I understand it, the consumer can only read "committed" messages - which
> I believe, if we look at internals of it, committed messages are nothing
> but messages which are upto the high watermark.
> *The high watermark is the offset of the last message that was successfully
> copied to all of the log’s replicas. *
>
> *Having said that, if one of the replica is down, will high water mark be*
> *advanced?*
>
> *If replica can't come forever, can we consider this message cant be
> consumed by the consumer since it is never committed *
>


Re: Highwater mark interpretation

2020-06-21 Thread D C
Hey Nag Y,

I’m not exactly sure that reducing the replication factor while a broker is
down would release the messages to be consumed (or at least not on all
partitions), for the simple fact that it might just remove the last replica
in the list, which might not match your unreachable broker.
Personally I would do a manual reassignment of partitions (Kafka Manager
lets you do that in an easy visual environment) and move the replicas off
the broken broker to a working one; once that’s done and the data is copied
to the new broker, the high watermark should go up, as all the replicas
will be in sync.
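A manual reassignment like the one described is driven by a JSON file handed to the kafka-reassign-partitions tool. The topic name and broker ids below are purely illustrative (broker 3 standing in for the dead broker, broker 4 for its replacement):

```json
{
  "version": 1,
  "partitions": [
    { "topic": "my-topic", "partition": 0, "replicas": [1, 2, 4] }
  ]
}
```

Applied with something like `kafka-reassign-partitions.sh --bootstrap-server broker1:9092 --reassignment-json-file reassign.json --execute`, this moves partition 0's replica set from the old [1, 2, 3] to [1, 2, 4].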

Cheers,
D

On Sunday, June 21, 2020, Nag Y  wrote:

> Thanks D C. Thanks a lot . That is quite a detailed explanation.
> If I understand correctly (ignoring the case where producers
> create transactions) - since the replica is down and never comes back, the
> high watermark CANNOT advance and the consumer CANNOT read the messages
> which were sent after the replica went down, as the message is NOT
> committed - hope this is correct?

——
Indeed, this is correct for as long as the down replica remains in the
in-sync replica set; once it drops out of the ISR, the high watermark can
advance with the remaining in-sync replicas.
——

>
> To address this situation, either we should make sure the replica is up or
> reduce the replication factor so that the message will be committed and
> consumer can start reading the messages ...
>
> Regards,
>  Nag
>
>
> On Sun, Jun 21, 2020 at 3:25 AM D C  wrote:
>
> > The short answer is : yes, a consumer can only consume messages up to the
> > High Watermark.
> >
> > The long answer is not exactly, for the following reasons:
> >
> > At the partition level you have 3 major offsets that are important to the
> > health of the partition and accessibility from the consumer pov:
> > LeO (log end offset) - which represents the highest offset in the highest
> > segment
> > High Watermark - which represents the latest offset that has been
> > replicated to all the followers
> > LSO (Last stable offset) - which is important when you use producers that
> > create transactions - which represents the highest offset that has been
> > committed by a transaction and that is allowed to be read with isolation
> > level = read_committed.
> >
> > The LeO can only be higher or equal to the High Watermark (for obvious
> > reasons)
> > The High Watermark can only be higher or equal to the LSO (the messages
> up
> > to this point may have been committed to all the followers but the
> > transaction isn't yet finished)
> > And coming to your question, in case the transaction hasn't finished, the
> > LSO may be lower than the High Watermark so if your consumer is accessing
> > the data in Read_Committed, it won't be able to surpass the LSO.
> >
> > Cheers,
> > D
> >
> > On Sat, Jun 20, 2020 at 9:05 PM Nag Y 
> wrote:
> >
> > > As I understand it, the consumer can only read "committed" messages -
> > which
> > > I believe, if we look at internals of it, committed messages are
> nothing
> > > but messages which are upto the high watermark.
> > > *The high watermark is the offset of the last message that was
> > successfully
> > > copied to all of the log’s replicas. *
> > >
> > > *Having said that, if one of the replica is down, will high water mark
> > be*
> > > *advanced?*
> > >
> > > *If replica can't come forever, can we consider this message cant be
> > > consumed by the consumer since it is never committed *
> > >
> >
>


--


Re: First time patch submitter advice

2020-06-21 Thread Michael Carter
Thanks Bruno

> On 19 Jun 2020, at 6:48 pm, Bruno Cadonna  wrote:
> 
> I meant "Hi Michael" not Luke.
> Sorry Michael and Luke.
> 
> Best,
> Bruno
> 
> On Fri, Jun 19, 2020 at 10:47 AM Bruno Cadonna  wrote:
>> 
>> Hi Luke,
>> 
>> The guide is a bit outdated. Thank you for pointing it out. I updated the 
>> guide.
>> 
>> As Gwen stated above:
>> 
>>> Unfortunately, you need to get a committer to approve running the tests.
>> 
>> So, yes a committer has to comment on the PR.
>> 
>> Best,
>> Bruno
>> 
>> On Fri, Jun 19, 2020 at 1:28 AM Michael Carter
>>  wrote:
>>> 
>>> Hi Gwen and Luke,
>>> 
>>> Sorry, I’ve probably misunderstood something again. Since KAFKA-10155 and 
>>> KAFKA-10147 have now been resolved, I merged trunk back into my branch 
>>> and added the comment “retest this please” to my pull request 
>>> (https://github.com/apache/kafka/pull/8844) like the contributing 
>>> guidelines state. Unfortunately, no tests seem to have been run.
>>> Does a committer have to comment on the PR instead?
>>> 
>>> Thanks,
>>> Michael
>>> 
 On 16 Jun 2020, at 9:33 am, Michael Carter 
  wrote:
 
 Great, thanks Luke.
 I’ve undone the patch and added that comment.
 
 Cheers,
 Michael
 
> On 15 Jun 2020, at 6:07 pm, Luke Chen  wrote:
> 
> Hi Michael,
> The failed unit test has already handled here:
> https://issues.apache.org/jira/browse/KAFKA-10155
> https://issues.apache.org/jira/browse/KAFKA-10147
> 
> So, maybe you can ignore the test errors and mention the issue number in 
> PR.
> Thanks.
> 
> Luke
> 
> On Mon, Jun 15, 2020 at 3:23 PM Michael Carter <
> michael.car...@instaclustr.com> wrote:
> 
>> Thanks for the response Gwen, that clarifies things for me.
>> 
>> Regarding the unit test (ReassignPartitionsUnitTest.
>> testModifyBrokerThrottles), it appears to fail quite reliably on trunk as
>> well (at least on my machine).
>> It looks to me like a new override to
>> MockAdminClient.describeConfigs(Collection<ConfigResource> resources)
>> (MockAdminClient.java line 369) introduced in commit
>> 48b56e533b3ff22ae0e2cf7fcc649e7df19f2b06 changed the behaviour of this
>> method that the unit test relied on.
>> I’ve just now put a patch into my branch to make that test pass by 
>> calling
>> a slightly different version of describeConfigs (that avoids the 
>> overridden
>> behaviour). It’s probably arguable whether that constitutes a fix or not
>> though.
>> 
>> Cheers,
>> Michael
>> 
>>> On 15 Jun 2020, at 3:41 pm, Gwen Shapira  wrote:
>>> 
>>> Hi,
>>> 
>>> 1. Unfortunately, you need to get a committer to approve running the
>> tests.
>>> I just gave the green-light on your PR.
>>> 2. You can hope that committers will see your PR, but sometimes things
>> get
>>> lost. If you know someone who is familiar with that area of the code, it
>> is
>>> a good idea to ping them.
>>> 3. We do have some flaky tests. You can see that Jenkins will run 3
>>> parallel builds, if some of them pass and the committer confirms that
>>> failures are not related to your code, we are ok to merge. Obviously, if
>>> you end up tracking them down and fixing, everyone will be very 
>>> grateful.
>>> 
>>> Hope this helps,
>>> 
>>> Gwen
>>> 
>>> On Sun, Jun 14, 2020 at 5:52 PM Michael Carter <
>>> michael.car...@instaclustr.com> wrote:
>>> 
 Hi all,
 
 I’ve submitted a patch for the first time(
 https://github.com/apache/kafka/pull/8844 <
 https://github.com/apache/kafka/pull/8844>), and I have a couple of
 questions that I’m hoping someone can help me answer.
 
 I’m a little unclear what happens after that patch has been submitted.
>> The
 coding guidelines say Jenkins will run tests automatically, but I don’t
>> see
 any results anywhere. Have I misunderstood what should happen, or do I
>> just
 not know where to look?
 Should I be attempting to find reviewers for the change myself, or is
>> that
 done independently of the patch submitter?
 
 Also, in resolving a couple of conflicts that have arisen after the
>> patch
 was first submitted, I noticed that there are now failing unit tests
>> that
 have nothing to do with my change. Is there a convention on how to deal
 with these? Should it be something that I try to fix on my branch?
 
 Any thoughts are appreciated.
 
 Thanks,
 Michael
>>> 
>>> 
>>> 
>>> --
>>> Gwen Shapira
>>> Engineering Manager | Confluent
>>> 650.450.2760 | @gwenshap
>>> Follow us: Twitter | blog
>> 
>> 
 
>>>