Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #863

2022-04-14 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-13653) Proactively discover alive brokers from bootstrap server lists when all nodes are down

2022-04-14 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen resolved KAFKA-13653.
---
Resolution: Duplicate

> Proactively discover alive brokers from bootstrap server lists when all nodes 
> are down
> --
>
> Key: KAFKA-13653
> URL: https://issues.apache.org/jira/browse/KAFKA-13653
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 3.1.0
>Reporter: Luke Chen
>Priority: Major
>
> Currently, a client updates its metadata in 2 situations:
>  # partition leader change
>  # metadata expired (default 5 mins)
> But sometimes the client and the brokers are started at the same time, and the 
> client might initially discover only some of the brokers. If those discovered 
> brokers go down within 5 mins (before the metadata expires), there is no chance 
> to update the metadata, and from the client's point of view the whole cluster is down.
> Ex:
> 1. brokerA is up
> 2. producer is up, discovered brokerA, update its metadata
> 3. brokerB, brokerC are up (but the producer doesn't know, and the leader 
> imbalance check has not expired (5 mins default))
> 4. producer keeps producing data without error
> 5. brokerA goes down, let's say, 3 mins after the producer started
> 6. Now, the whole cluster is unusable for the producer even though brokerB and 
> brokerC are up
>  
> We should proactively discover alive brokers via the bootstrap server config 
> when there are no nodes left to connect to. So, in the above example, if 
> bootstrap.servers is set to "brokerA_IP,brokerB_IP,brokerC_IP", then we should 
> be able to discover brokerB and brokerC after step 6.
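
For illustration, a minimal producer setup matching the example above (a sketch
only; the class name, broker addresses, ports, and topic name are placeholders).
The point of the proposal is that, after step 6, the client would fall back to
this bootstrap.servers list to rediscover brokerB and brokerC:
{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BootstrapExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // List every broker so the client has fallback candidates once re-bootstrapping is supported.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                  "brokerA_IP:9092,brokerB_IP:9092,brokerC_IP:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("example-topic", "key", "value"));
        }
    }
}
{code}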



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #862

2022-04-14 Thread Apache Jenkins Server
See 




Re: maxParallelForks while running tests

2022-04-14 Thread Luke Chen
Hi Unmesh,

Are you running into any issues with that?

Currently, `maxParallelForks` can be set via a Gradle property:
https://github.com/apache/kafka/blob/trunk/build.gradle#L78

And in Jenkins, it looks like we default to 2.
https://github.com/apache/kafka/blob/trunk/Jenkinsfile#L40
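
(For reference, a hypothetical local override would look something like
`./gradlew test -PmaxParallelForks=4`, assuming the `maxParallelForks` project
property that build.gradle reads at the line above.)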

Thank you.
Luke

On Fri, Apr 15, 2022 at 1:24 AM Unmesh Joshi  wrote:

> Hi,
> I came across this issue which has discussion about capping
> maxParallelForks while running tests.
> https://issues.apache.org/jira/browse/KAFKA-2613
> Is this still the case?
>
> Thanks,
> Unmesh
>


[jira] [Resolved] (KAFKA-13242) KRaft Controller doesn't handle UpdateFeaturesRequest

2022-04-14 Thread dengziming (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dengziming resolved KAFKA-13242.

Resolution: Fixed

> KRaft Controller doesn't handle UpdateFeaturesRequest
> -
>
> Key: KAFKA-13242
> URL: https://issues.apache.org/jira/browse/KAFKA-13242
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: dengziming
>Assignee: dengziming
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: [VOTE] KIP-825: introduce a new API to control when aggregated results are produced

2022-04-14 Thread Guozhang Wang
Hello Hao,

One further suggestion, to align with KIP-444 (
https://cwiki.apache.org/confluence/display/KAFKA/KIP-444%3A+Augment+metrics+for+Kafka+Streams):
the suppression processor node already has an existing metric:

suppression-emit (rate | total)


So I feel we could name the new ones more explicitly as
window-aggregate-final-emit-(rate | total) and
window-aggregate-final-emit-latency-(avg | max). WDYT?

Also, I'm wondering how the latency is measured in these metrics. Is it
measured as the time difference between "when a final result record COULD
be emitted" and "when that final result record is actually emitted"?
In any case, it would be better to state clearly in the KIP how the latency is
measured.
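
For readers following along, a rough sketch of how a user could inspect whatever
final name we settle on, via the public KafkaStreams#metrics() API (the
"window-aggregate-final-emit" prefix below is only a placeholder pending this
discussion, and `streams` is assumed to be a running KafkaStreams instance):

// Sketch only: the metric name prefix is a placeholder, not a committed name.
streams.metrics().forEach((metricName, metric) -> {
    if (metricName.name().startsWith("window-aggregate-final-emit")) {
        System.out.printf("%s (group=%s, tags=%s) = %s%n",
            metricName.name(), metricName.group(), metricName.tags(),
            metric.metricValue());
    }
});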


Guozhang

On Tue, Apr 12, 2022 at 9:50 AM Hao Li  wrote:

> Thanks for the feedback Bruno and John.
>
> 1. The name proposed by John sounds good to me!
> 2. I will use processor-node metrics at debug level, since these don't
> seem like top-level metrics and it makes more sense not to mix them in at the task level.
> 3. Will update KIP as Bruno suggested.
>
> Thanks,
> Hao
>
> On Tue, Apr 12, 2022 at 1:30 AM Bruno Cadonna  wrote:
>
> > Hi Hao,
> >
> > Thanks for the addition!
> >
> > I second what John said about the naming.
> >
> > Could you please describe the metrics as has been previously done in
> > KIP-471, KIP-613, or KIP-761?
> > That would make the metrics more concise and clear. In addition,
> > TaskMetrics is an internal class that is an implementation detail and
> > hence not intended to be shown in a KIP.
> >
> > Best,
> > Bruno
> >
> > On 12.04.22 06:03, John Roesler wrote:
> > > Thanks, Hao!
> > >
> > > I have no concern about amending the KIP to add metrics.
> > > Thanks for thinking of it.
> > >
> > > Can you comment on the choice to add them as a task-level metric
> > > instead of a processor-level metric? This will cause the metrics for
> > > all windowed aggregations in a task that use final emission to be
> > > mixed together. It might be fine, but we should at least document
> > > that it was anticipated and the reasons for the choice. By the way,
> > > if we do add them as processor-node metrics but want them to
> > > be measured at info level, we should also state it, since processor-
> > > node metrics are usually debug.
> > >
> > > Also, I'm concerned that the name `emitted-records` will be
> > > ambiguous in the larger context of all Kafka Streams metrics. If I'm
> > > right in thinking that these metrics are only for measuring the
> > > behavior of emit-final windowed aggregations, then we should
> > > make sure that the metric name says as much. Maybe:
> > >
> > > emit-final-records-[rate|total]
> > > emit-final-latency-[avg|max]
> > >
> > > Thanks!
> > > -John
> > >
> > > On Mon, Apr 11, 2022, at 14:25, Hao Li wrote:
> > >> Hi all,
> > >>
> > >> I would like to introduce two metrics in this KIP as well to measure
> the
> > >> latency and number of records emitted for emit final. They are named:
> > >>
> > >> `emit-final-latency`
> > >> `emitted-records`
> > >>
> > >> I've updated the KIP with details in
> > >>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-825%3A+introduce+a+new+API+to+control+when+aggregated+results+are+produced
> > >>
> > >> Can you take a look and see if you have any concerns for the metrics?
> > >>
> > >> Thanks,
> > >> Hao
> > >>
> > >> On Fri, Mar 25, 2022 at 8:27 AM Hao Li  wrote:
> > >>
> > >>> Got it, I forgot about that. Yeah, it's still open and people can still
> > >>> vote. Thanks for the reminder!
> > >>>
> > >>> Hao Li
> > >>>
> >  On Mar 25, 2022, at 8:22 AM, Guozhang Wang 
> > wrote:
> > 
> >  Hello Hao,
> > 
> >  According to bylaws the voting has to last for at least 72 business
> > >>> hours.
> >  So let's wait a bit longer to see if there are different opinions
> > before
> >  calling it close.
> > 
> > > On Thu, Mar 24, 2022 at 4:20 PM Hao Li 
> > >>> wrote:
> > >
> > > The vote happened in the discussion thread since I started the vote
> > >>> there
> > > by mistake. But it passed there. To avoid having everyone vote
> > again. I
> > > copied the content from that thread here:
> > >
> > >  end of discussion thread vote
> > > ==
> > > The vote passed with 5 binding votes from John, Guozhang, Bruno,
> > >>> Matthias
> > > and Bill.
> > >
> > > Thanks all for the feedback and vote.
> > >
> > > Hao
> > >
> > >> On Thu, Mar 24, 2022 at 2:20 PM Bill Bejeck 
> > wrote:
> > >>
> > >> Thanks for KIP Hao!
> > >>
> > >> Glad to see we settled on option 1
> > >>
> > >> +1(binding)
> > >>
> > >> On Thu, Mar 24, 2022 at 5:13 PM Matthias J. Sax  >
> > > wrote:
> > >>
> > >>> +1 (binding)
> > >>>
> > >>>
> > >>> On 3/24/22 1:52 PM, Hao Li wrote:
> >  I hit reply on my phone in the mail app and changed the title
> and
> > > text
> >  

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.2 #30

2022-04-14 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #861

2022-04-14 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #860

2022-04-14 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-13830) Introduce metadata.version in KRaft

2022-04-14 Thread David Arthur (Jira)
David Arthur created KAFKA-13830:


 Summary: Introduce metadata.version in KRaft
 Key: KAFKA-13830
 URL: https://issues.apache.org/jira/browse/KAFKA-13830
 Project: Kafka
  Issue Type: Sub-task
Reporter: David Arthur
Assignee: David Arthur
 Fix For: 3.3.0






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (KAFKA-13823) Remove "max" version level from finalized features

2022-04-14 Thread David Arthur (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Arthur resolved KAFKA-13823.
--
Fix Version/s: 3.3.0
   Resolution: Fixed

> Remove "max" version level from finalized features
> --
>
> Key: KAFKA-13823
> URL: https://issues.apache.org/jira/browse/KAFKA-13823
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Arthur
>Assignee: David Arthur
>Priority: Major
> Fix For: 3.3.0
>
>
> As specified in KIP-778, we need to remove the finalized max version from the 
> feature flags code. Even though we don't have any usages in the ZK 
> controller, we need to make the changes there to stay consistent with the 
> KRaft implementation. 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


maxParallelForks while running tests

2022-04-14 Thread Unmesh Joshi
Hi,
I came across this issue which has discussion about capping
maxParallelForks while running tests.
https://issues.apache.org/jira/browse/KAFKA-2613
Is this still the case?

Thanks,
Unmesh


[GitHub] [kafka-site] bbejeck commented on pull request #404: Add atruvia to powered-by

2022-04-14 Thread GitBox


bbejeck commented on PR #404:
URL: https://github.com/apache/kafka-site/pull/404#issuecomment-1099286874

   @PhilippB21 thanks for the addition to the powered-by page


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka-site] bbejeck merged pull request #404: Add atruvia to powered-by

2022-04-14 Thread GitBox


bbejeck merged PR #404:
URL: https://github.com/apache/kafka-site/pull/404


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



RE: [DISCUSS] KIP-613: Add end-to-end latency metrics to Streams

2022-04-14 Thread Chaimaa LOTFI


Hello, I hope you are doing well.
I have a quick question. I created a Kafka Streams dashboard which works 
well, except that we get a negative latency for the record end-to-end latency 
metric (kafka_streams_stream_processor_node_metrics_record_e2e_latency_min). 
While investigating, I saw in Thanos that it reports either NaN or a negative 
value, and I am still looking into why. Do you have any idea?
Since latency should never be negative, what does this metric 
calculate exactly?
Thank you in advance!
Best regards

On 2020/05/13 02:27:54 Sophie Blee-Goldman wrote:
> Hey all,
>
> I'd like to kick off discussion on KIP-613 which aims to add end-to-end
> latency metrics to Streams. Please take a look:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-613%3A+Add+end-to-end+latency+metrics+to+Streams
>
> Cheers,
> Sophie
>


Re: [VOTE] KIP-813 Shared State Stores

2022-04-14 Thread John Roesler
Hey Daan,

The final step in the KIP process is to declare the KIP accepted, like this:

The KIP-813 vote has passed with:

3 binding +1s (John, Matthias, Bill)
2 non-binding +1s (Daan, Federico)

Thanks,
John

(PS: can you also update the wiki to say the KIP is accepted?)

On Tue, Apr 12, 2022, at 02:51, Daan Gertis wrote:
> Cool! I’ll move forward and start preparing the PR for it.
>
> Cheers!
> D.
>
> From: Bill Bejeck 
> Date: Tuesday, 5 April 2022 at 18:17
> To: dev 
> Subject: Re: [VOTE] KIP-813 Shared State Stores
> Thanks for the KIP, Daan.
>
> I've caught up on the discussion thread and I've gone over the KIP.  This
> seems like a good addition to me.
>
> +1 (binding)
>
> Thanks,
> Bill
>
> On Fri, Apr 1, 2022 at 2:13 PM Matthias J. Sax  wrote:
>
>> +1 (binding)
>>
>>
>> On 4/1/22 6:47 AM, John Roesler wrote:
>> > Thanks for the KIP, Daan!
>> >
>> > I’m +1 (binding)
>> >
>> > -John
>> >
>> > On Tue, Mar 29, 2022, at 06:01, Daan Gertis wrote:
>> >> I would like to start a vote on this one:
>> >>
>> >>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-813%3A+Shareable+State+Stores
>> >>
>> >> Cheers,
>> >> D.
>>


[GitHub] [kafka-site] cadonna merged pull request #405: Add Bruno's public key to KEYS

2022-04-14 Thread GitBox


cadonna merged PR #405:
URL: https://github.com/apache/kafka-site/pull/405


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #859

2022-04-14 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-797: Accept duplicate listener on port for IPv4/IPv6

2022-04-14 Thread Matthew de Detrich
Hi David,

Thanks for the response.

> 1. In the public interface section, could we spell out
the configurations that we are changing with this
KIP? The name does not change but the semantics do,
so it is good to be clear.

Done

> 2. In the proposed changes section, I would rather
mention the configuration that we need to change the
validation for instead of saying "loosening the validation
on listenerListToEndPoints in kafka.utils.CoreUtils.scala"
as this is specific to the implementation.

This is already done with examples later in the same section, or am I
missing something? Would you like me to just remove the
kafka.utils.CoreUtils.scala reference so it's not implying an implementation
detail?

> 3. For my understanding, using the same port with two
different DNS entries would fail, right? e.g.
"PLAINTEXT://foo:9092,PLAINTEXT://bar:9092"

Correct. The idea is that the check first verifies that the listener host is an
IP address, and if it isn't, it doesn't consider it further (i.e. it
short-circuits to the current behaviour). The proposed KIP changes only apply if
the hostnames in the listeners are IP addresses; otherwise no change is
observable.
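
For illustration (the listener names and addresses below are made up, and custom
names would also need entries in listener.security.protocol.map), the intent is
that something like

listeners=LISTENER_V4://192.168.0.10:9092,LISTENER_V6://[::1]:9092

would pass validation after this KIP, because both hosts are IP addresses of
different stacks sharing the port, whereas your DNS example
"PLAINTEXT://foo:9092,PLAINTEXT://bar:9092" would still be rejected.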

Regards

On Mon, Feb 21, 2022 at 10:42 AM David Jacot 
wrote:

> Hi Matthew,
>
> Thanks for the KIP. I have a few minor comments:
>
> 1. In the public interface section, could we spell out
> the configurations that we are changing with this
> KIP? The name does not change but the semantics do,
> so it is good to be clear.
>
> 2. In the proposed changes section, I would rather
> mention the configuration that we need to change the
> validation for instead of saying "loosening the validation
> on listenerListToEndPoints in kafka.utils.CoreUtils.scala"
> as this is specific to the implementation.
>
> 3. For my understanding, using the same port with two
> different DNS entries would fail, right? e.g.
> "PLAINTEXT://foo:9092,PLAINTEXT://bar:9092"
>
> Best,
> David
>
> On Fri, Feb 11, 2022 at 10:35 AM Luke Chen  wrote:
> >
> > Hi Matthew,
> >
> > Thanks for the update.
> > I'm +1 (binding)
> >
> > Thank you.
> > Luke
> >
> > On Fri, Feb 11, 2022 at 3:32 PM Matthew de Detrich
> >  wrote:
> >
> > > Hi Luke,
> > >
> > > I have just updated the KIP with the changes you requested.
> > >
> > > Regards
> > >
> > > On Fri, Feb 11, 2022 at 4:47 AM Luke Chen  wrote:
> > >
> > > > Hi Matthew,
> > > >
> > > > I checked again the KIP, and it LGTM.
> > > >
> > > > Just a minor comment:
> > > > Maybe add some examples into the KIP to show how users can set both
> IPv4
> > > > and IPv6 on the same port.
> > > > And some examples to show how the validation will fail like you
> listed in
> > > > `Proposed Changes`.
> > > >
> > > > Thank you.
> > > > Luke
> > > >
> > > >
> > > > On Fri, Feb 11, 2022 at 8:54 AM Matthew de Detrich
> > > >  wrote:
> > > >
> > > > > Hello everyone
> > > > >
> > > > > I have just updated/rebased the PR against the latest Kafka trunk.
> Let
> > > me
> > > > > know if anything else is required/missing.
> > > > >
> > > > > Regards
> > > > >
> > > > > On Thu, Jan 13, 2022 at 10:28 AM Matthew de Detrich <
> > > > > matthew.dedetr...@aiven.io> wrote:
> > > > >
> > > > > > Does anyone have any additional comments/regards to help get
> this PR
> > > > > voted
> > > > > > through?
> > > > > >
> > > > > > On Tue, Nov 23, 2021 at 7:46 AM Josep Prat
> > >  > > > >
> > > > > > wrote:
> > > > > >
> > > > > >> Hi Matthew,
> > > > > >>
> > > > > >> Thank you for the PR.
> > > > > >>
> > > > > >> +1 (non binding) from my side.
> > > > > >>
> > > > > >>
> > > > > >> Best,
> > > > > >>
> > > > > >> ———
> > > > > >> Josep Prat
> > > > > >>
> > > > > >> Aiven Deutschland GmbH
> > > > > >>
> > > > > >> Immanuelkirchstraße 26, 10405 Berlin
> > > > > >>
> > > > > >> Amtsgericht Charlottenburg, HRB 209739 B
> > > > > >>
> > > > > >> Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> > > > > >>
> > > > > >> m: +491715557497
> > > > > >>
> > > > > >> w: aiven.io
> > > > > >>
> > > > > >> e: josep.p...@aiven.io
> > > > > >>
> > > > > >> On Tue, Nov 23, 2021, 07:11 Ivan Yurchenko <
> > > ivan0yurche...@gmail.com>
> > > > > >> wrote:
> > > > > >>
> > > > > >> > Hi,
> > > > > >> >
> > > > > >> > Thank you for the KIP.
> > > > > >> >
> > > > > >> > +1 (non-binding)
> > > > > >> >
> > > > > >> > Ivan
> > > > > >> >
> > > > > >> >
> > > > > >> > On Tue, 23 Nov 2021 at 04:18, Luke Chen 
> > > wrote:
> > > > > >> >
> > > > > >> > > Hi Matthew,
> > > > > >> > > Thanks for the KIP.
> > > > > >> > > It makes sense to allow IPv4 and IPv6 listening on the same
> port
> > > > for
> > > > > >> the
> > > > > >> > > listener config.
> > > > > >> > >
> > > > > >> > > +1 (non-binding)
> > > > > >> > >
> > > > > >> > > Thank you.
> > > > > >> > > Luke
> > > > > >> > >
> > > > > >> > > On Mon, Nov 22, 2021 at 6:28 PM Matthew de Detrich
> > > > > >> > >  wrote:
> > > > > >> > >
> > > > > >> > > > Hello everyone,
> > > > > >> > > >
> > > > > >> > > > I would like to start a vote for KIP-797: Accept duplicate
> > > > 

[jira] [Resolved] (KAFKA-12613) Inconsistencies between Kafka Config and Log Config

2022-04-14 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-12613.

Fix Version/s: 3.3.0
   Resolution: Fixed

> Inconsistencies between Kafka Config and Log Config
> ---
>
> Key: KAFKA-12613
> URL: https://issues.apache.org/jira/browse/KAFKA-12613
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: 20210404-161832.png
>
>
> I found this problem while investigating KAFKA-8926.
> Some broker-wide configurations (defined in KafkaConfig) are mapped to 
> log-wide configurations (defined in LogConfig), providing a default value. 
> You can find the complete mapping list in `LogConfig.TopicConfigSynonyms`.
> The problem is that *the validation of some configuration properties differs 
> between KafkaConfig and LogConfig*:
> !20210404-161832.png!
> These inconsistencies cause problems with the dynamic configuration 
> feature. When a user dynamically configures the broker configuration with 
> `AdminClient#alterConfigs`, the submitted config is validated against 
> KafkaConfig, which lacks some validation logic - as a result, the change 
> bypasses the correct validation.
> For example, a user can set `log.cleaner.min.cleanable.ratio` to -0.5 - which 
> is obviously prohibited in LogConfig.
>  * I could not reproduce the situation KAFKA-8926 describes, but fixing this 
> problem also resolves KAFKA-8926.
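
For illustration, a minimal sketch of the kind of dynamic change described above,
using the non-deprecated incrementalAlterConfigs variant (the class name,
bootstrap address, and broker id "0" are placeholders, and depending on the
config the change may need to target the cluster-wide default instead). Before
the fix, the -0.5 value was validated only against KafkaConfig and was therefore
accepted:
{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class AlterCleanableRatioSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        try (Admin admin = Admin.create(props)) {
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0"); // placeholder id
            AlterConfigOp op = new AlterConfigOp(
                new ConfigEntry("log.cleaner.min.cleanable.ratio", "-0.5"), // out of range per LogConfig
                AlterConfigOp.OpType.SET);
            // Before the fix, this passed because KafkaConfig lacked the range check.
            admin.incrementalAlterConfigs(
                Collections.singletonMap(broker, Collections.singletonList(op))).all().get();
        }
    }
}
{code}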



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: [VOTE] KIP-622 Add currentSystemTimeMs and currentStreamTimeMs to ProcessorContext

2022-04-14 Thread Jorge Esteban Quilcate Otoya
Yet another quick FYI.

While implementing KIP-820, we found that `api.MockProcessorContext` was
missing these new methods as well.
We added the new methods to the new `api.MockProcessorContext` via
https://issues.apache.org/jira/browse/KAFKA-13654.

Please let us know if there are any concerns.

I updated the KIP accordingly.
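
For anyone following along, a rough test sketch of how the setters are meant to
be used (method names are taken from KIP-622; this uses the existing
processor.MockProcessorContext, and per KAFKA-13654 the new
api.MockProcessorContext now exposes the same setters):

import org.apache.kafka.streams.processor.MockProcessorContext;

// Illustrative only: pin wall-clock and stream time for a processor under test.
MockProcessorContext context = new MockProcessorContext();
context.setCurrentSystemTimeMs(1_000_000L); // returned by currentSystemTimeMs()
context.setCurrentStreamTimeMs(900_000L);   // returned by currentStreamTimeMs()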

Cheers,
Jorge

On Tue, 15 Mar 2022 at 23:13, Matthias J. Sax  wrote:

> Just a quick FYI.
>
> KIP-622 overlapped with KIP-478.
>
> We added the new method to the new `api.ProcessorContext` via
> https://issues.apache.org/jira/browse/KAFKA-13699 for 3.2.0 release.
>
> Please let us know if there are any concerns.
>
> I updated the KIP accordingly.
>
> -Matthias
>
> On 3/5/21 8:42 PM, Rohit Deshpande wrote:
> > Hello all,
> > Based on the feedback of the pr <
> https://github.com/apache/kafka/pull/9744>
> > https://github.com/apache/kafka/pull/9744, there are following changes
> done
> > to the kip
> > <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-622%3A+Add+currentSystemTimeMs+and+currentStreamTimeMs+to+ProcessorContext
> >
> > .
> >
> > *ProcessorContext#currentSystemTimeMs()*
> >
> > It is expected that this method will return the internally cached system
> > timestamp from the Kafka Streams runtime. Thus, it may return a different
> > value compared to System.currentTimeMillis(). The cached system time
> > represents the time when we start processing / punctuating, and it does
> > not change throughout the process / punctuate call. So this method returns
> > the current system time (also called wall-clock time) as known by the
> > Kafka Streams runtime.
> >
> > New methods to MockProcessorContext for testing purposes:
> >
> > *MockProcessorContext#setRecordTimestamp*: set record timestamp
> >
> > *MockProcessorContext#setCurrentSystemTimeMs:* set system timestamp
> >
> > *MockProcessorContext#setCurrentStreamTimeMs*: set stream time
> >
> > Deprecate method: MockProcessorContext#setTimestamp, as its name is
> > misleading and we are adding a new method,
> >   MockProcessorContext#setRecordTimestamp, which does the same work.
> >
> > Please let me know if you have any thoughts or concerns with this change.
> >
> > Thanks,
> > Roohit
> >
> > On Fri, Dec 4, 2020 at 7:31 PM Rohit Deshpande 
> > wrote:
> >
> >> Hello all,
> >> I am closing the vote for this KIP:
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-622%3A+Add+currentSystemTimeMs+and+currentStreamTimeMs+to+ProcessorContext
> >>
> >> Summary of the KIP:
> >> Planning to add two new methods to ProcessorContext:
> >> 1. long currentSystemTimeMs() to fetch wall-clock time
> >> 2. long currentStreamTimeMs() to fetch maximum timestamp of any record
> yet
> >> processed by the task
> >>
> >> Thanks,
> >> Rohit
> >>
> >>
> >> On 2020/12/01 16:09:54, Bill Bejeck  wrote:
> >>> Sorry for jumping into this so late,
> >>>
> >>> Thanks for the KIP, I'm a +1 (binding)
> >>>
> >>> -Bill
> >>>
> >>> On Sun, Jul 26, 2020 at 11:06 AM John Roesler 
> wrote:
> >>>
>  Thanks William,
> 
>  I’m +1 (binding)
> 
>  Thanks,
>  John
> 
>  On Fri, Jul 24, 2020, at 20:22, Sophie Blee-Goldman wrote:
> > Thanks all, +1 (non-binding)
> >
> > Cheers,
> > Sophie
> >
> > On Wed, Jul 8, 2020 at 4:02 AM Bruno Cadonna 
> >> wrote:
> >
> >> Thanks Will and Piotr,
> >>
> >> +1 (non-binding)
> >>
> >> Best,
> >> Bruno
> >>
> >> On Wed, Jul 8, 2020 at 8:12 AM Matthias J. Sax 
>  wrote:
> >>>
> >>> Thanks for the KIP.
> >>>
> >>> +1 (binding)
> >>>
> >>>
> >>> -Matthias
> >>>
> >>> On 7/7/20 11:48 AM, William Bottrell wrote:
>  Hi everyone,
> 
>  I'd like to start a vote for adding two new time API's to
> >> ProcessorContext.
> 
>  Add currentSystemTimeMs and currentStreamTimeMs to
> >> ProcessorContext
>  <
> >>
> 
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-622%3A+Add+currentSystemTimeMs+and+currentStreamTimeMs+to+ProcessorContext
> >>>
> 
>    Thanks everyone for the initial feedback and thanks for your
> >> time.
> 
> >>>
> >>
> >
> 
> >>>
> >>
> >
>


[GitHub] [kafka-site] cadonna opened a new pull request, #405: Add Bruno's public key to KEYS

2022-04-14 Thread GitBox


cadonna opened a new pull request, #405:
URL: https://github.com/apache/kafka-site/pull/405

   Adds Bruno Cadonna's public key to KEYS which is needed to release Apache 
Kafka 3.2.0.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (KAFKA-13829) The function of max.in.flight.requests.per.connection parameter does not work, it conflicts with the underlying NIO sending data

2022-04-14 Thread RivenSun (Jira)
RivenSun created KAFKA-13829:


 Summary: The function of max.in.flight.requests.per.connection 
parameter does not work, it conflicts with the underlying NIO sending data
 Key: KAFKA-13829
 URL: https://issues.apache.org/jira/browse/KAFKA-13829
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: RivenSun


Due to the way Kafka's underlying NIO layer handles the `OP_WRITE` event, the 
`max.in.flight.requests.per.connection` parameter does not take effect. This 
greatly affects Kafka's network sending performance.


The process by which Kafka's Selector sends a ClientRequest can be divided 
into the following two major steps:
h2. 1) Prepare the request data to be sent


1. NetworkClient.ready(Node node, long now) -> the 
NetworkClient.canSendRequest(...) method is called to determine whether the 
request is eligible to be sent.

2. NetworkClient.doSend(...): construct the `Send send` object and cache the request via 
inFlightRequests.add(inFlightRequest).

3. Execute the `KafkaChannel.setSend()` method:
it asserts that `this.send` *must be null* at this point, otherwise {*}an 
IllegalStateException is thrown{*}; 
it sets the `this.send` value;
and the transportLayer registers the `OP_WRITE` event.
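
For clarity, a simplified paraphrase of step 3 (not the exact trunk source; field 
and method names are from memory):
{code:java}
// Simplified paraphrase of KafkaChannel.setSend(): only ONE in-progress send is kept.
public void setSend(Send send) {
    if (this.send != null)
        throw new IllegalStateException(
            "Attempt to begin a send operation with prior send operation still in progress.");
    this.send = send;                                          // cache the single Send
    this.transportLayer.addInterestOps(SelectionKey.OP_WRITE); // register OP_WRITE
}
{code}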

 
h2. 2) Selector.poll(long timeout) should then be called to consume the Send 
and write the data to the network.

 
{code:java}
Selector.poll() -> this.nioSelector.select(timeout)
Selector.pollSelectionKeys() -> 
Selector.attemptWrite() -> 
Selector.write(channel) -> 
KafkaChannel.write() & KafkaChannel.maybeCompleteSend(){code}
 

1. Selector.poll -> this.nioSelector.select(timeout): pick up the previously 
registered `OP_WRITE` event.

2. Execute the `Selector.attemptWrite()` method.

3. In the `KafkaChannel.write()` method, after the data is successfully sent, 
update `this.send` to the completed state.

4. In the `KafkaChannel.maybeCompleteSend()` method, check whether the send is in 
the completed state; otherwise do nothing.

5. If the send is completed, the transportLayer removes the `OP_WRITE` event and 
resets the KafkaChannel.send object to null.

6. Wait for the next request that is ready to be sent.

 

The whole process above seems fine, but if you read 
NetworkClient.canSendRequest(...) carefully, inFlightRequests.canSendMore(node) 
contains the following condition: 
{code:java}
queue.peekFirst().send.completed(){code}

Secondly, the `inFlightRequests.add(inFlightRequest)` method also calls 
*addFirst(request).*

 

Currently, *only one send object* is stored in KafkaChannel, not a 
{*}collection{*} of send objects;
between OP_WRITE event registration and removal, *only one send object* 
*is written* in the KafkaChannel.write() method and *only one send object* 
*is completed* in the KafkaChannel.maybeCompleteSend() method.

So whether a clientRequest is eligible to be sent is {color:#FF}*only 
limited by queue.peekFirst().send.completed()*{color}; the 
{color:#FF}*max.in.flight.requests.per.connection*{color} parameter 
loses its effect, and {*}{color:#FF}setting it greater than 1 is 
equivalent to setting it to 1{color}{*}.
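
For clarity, a simplified paraphrase of that check (again from memory, not the 
exact trunk source):
{code:java}
// Simplified paraphrase of InFlightRequests.canSendMore(node).
public boolean canSendMore(String node) {
    Deque<NetworkClient.InFlightRequest> queue = requests.get(node);
    return queue == null || queue.isEmpty() ||
           (queue.peekFirst().send.completed()                        // newest request fully written?
            && queue.size() < this.maxInFlightRequestsPerConnection); // the limit discussed here
}
{code}
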
h2. Suggestion


Due to the KafkaClient architecture, we do not need to consider concurrent 
execution of the `NetworkClient.poll` method by multiple threads.

1. NetworkClient.canSendRequest(...) removes the condition: 
{code:java}
queue.peekFirst().send.completed(){code}
The `max.in.flight.requests.per.connection` parameter will then work again.



2. Previously, the `OP_WRITE` event was registered in KafkaChannel.setSend() and 
removed in KafkaChannel.maybeCompleteSend().
That may no longer be appropriate: because we want to send more than one send 
object between the registration and removal of the `OP_WRITE` event, it is 
recommended {*}not to repeatedly register and remove `OP_WRITE` events{*}, but 
instead to {*}register the `OP_WRITE` event in the 
`transportLayer.finishConnect()` method{*}:
{code:java}
key.interestOps(key.interestOps() & ~SelectionKey.OP_CONNECT | 
SelectionKey.OP_READ | SelectionKey.OP_WRITE);{code}

3. The KafkaChannel.setSend() method no longer caches just *one* incoming send 
object; instead, incoming send objects are stored in a *sendCollection* 
structure.

4. When KafkaChannel.write() is executed, copy sendCollection to 
{*}midWriteSendCollection{*}, then clear sendCollection, and finally send all 
the data in {*}midWriteSendCollection{*}.

5. In the KafkaChannel.maybeCompleteSend() method, determine whether there are 
any completed sends in the midWriteSendCollection, remove all 
completedSends from the midWriteSendCollection, and return the completedSends to 
be added to Selector.completedSends.

6. The Selector.attemptRead(channel) method already has the precondition 
`{*}!hasCompletedReceive(channel){*}`, so the KafkaChannel.receive object does 
not need to change.

7. A