Thanks for your review, Joel and Dong.
I've updated the KIP according to Dong's last comments.
Cheers!
Lucas
On Tue, Oct 9, 2018 at 10:06 PM Dong Lin wrote:
Hey Lucas,
Thanks for the KIP. Looks good overall. +1
I have two trivial comments which may be a bit useful to readers.
- Can we include the default value for the new config in the Public Interface
section? Typically the default value of a new config is an important part
of the public interface and we
+1
Thanks for the updated KIP.
On Tue, Oct 9, 2018 at 3:28 PM Lucas Wang wrote:
Thanks Jun, I've updated the KIP with the new names.
Hi Joel, Becket, Dong, Ismael,
Since you've reviewed this KIP in the past, can you please review it again?
Thanks a lot!
Lucas
On Mon, Oct 8, 2018 at 6:10 PM Jun Rao wrote:
Hi, Lucas,
Yes, the new names sound good to me.
Thanks,
Jun
On Fri, Oct 5, 2018 at 1:12 PM, Lucas Wang wrote:
Thanks for the suggestion, Ismael. I like it.
Jun,
I'm excited to get the +1, thanks a lot!
Meanwhile what do you feel about renaming the metrics and config to
ControlPlaneRequestQueueSize
ControlPlaneNetworkProcessorIdlePercent
ControlPlaneRequestHandlerIdlePercent
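For reference, with the type tags discussed in this thread, these proposed names would presumably surface as MBeans along these lines (a sketch modeled on the existing kafka.network and kafka.server metrics; the exact domains and type tags here are assumptions, not the final KIP text):

```text
kafka.network:type=RequestChannel,name=ControlPlaneRequestQueueSize
kafka.network:type=SocketServer,name=ControlPlaneNetworkProcessorIdlePercent
kafka.server:type=KafkaRequestHandlerPool,name=ControlPlaneRequestHandlerIdlePercent
```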
Hi, Lucas,
Thanks for the updated KIP. About the name of the metric, since it's tagged
by type=RequestChannel, it's probably fine to use the proposed name. So, +1
for the KIP.
Jun
On Thu, Oct 4, 2018 at 10:31 AM, Lucas Wang wrote:
Have we considered control plane if we think control by itself is
ambiguous? I agree with the original concern that "controller" may be
confusing for something that affects all brokers.
Ismael
On 4 Oct 2018 11:08 am, "Lucas Wang" wrote:
Thanks Jun. I've changed the KIP with the suggested 2 step upgrade.
Please take a look again when you have time.
Regards,
Lucas
On Thu, Oct 4, 2018 at 10:06 AM Jun Rao wrote:
Hi, Lucas,
200. That's a valid concern. So, we can probably just keep the current name.
201. I am thinking that you would upgrade in the same way as changing
inter.broker.listener.name. This requires 2 rounds of rolling restart. In
the first round, we add the controller endpoint to the listeners
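A sketch of what such a two-round upgrade might look like in server.properties (the listener names, ports, and the controller.listener.name config below are illustrative assumptions based on this thread, not the final KIP text):

```properties
# Round 1: add a dedicated controller endpoint to every broker and
# roll-restart. Brokers start listening on it but do not use it yet.
listeners=INTERNAL://broker1:9092,CONTROLLER://broker1:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,CONTROLLER:PLAINTEXT
inter.broker.listener.name=INTERNAL

# Round 2: point controller traffic at the new endpoint and
# roll-restart again.
controller.listener.name=CONTROLLER
```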
Thanks for the further comments, Jun.
200. Currently in the code base, we have the term "ControlBatch" related to
idempotent/transactional producing. Do you think it's a concern for reusing
the term "control"?
201. It's not clear to me how it would work by following the same strategy
for
Hi, Lucas,
Sorry for the delay. The updated wiki looks good to me overall. Just a
couple more minor comments.
200. kafka.network:name=ControllerRequestQueueSize,type=RequestChannel: The
name ControllerRequestQueueSize gives the impression that it's only for the
controller broker. Perhaps we can
Hi Jun,
Sorry to bother you again. Can you please take a look at the wiki again
when you have time?
Thanks a lot!
Lucas
On Wed, Sep 19, 2018 at 3:57 PM Lucas Wang wrote:
Hi Jun,
Thanks a lot for the detailed explanation.
I've restored the wiki to a previous version that does not require config
changes, and keeps the current behavior with the proposed changes turned
off by default.
I'd appreciate it if you can review it again.
Thanks!
Lucas
On Tue, Sep 18, 2018
Hi, Lucas,
When upgrading to a minor release, I think the expectation is that a user
wouldn't need to make any config changes, other than the usual
inter.broker.protocol. If we require other config changes during an
upgrade, then it's probably better to do that in a major release.
Regarding your
@Jun Rao
I made the recent config changes after thinking about the default behavior
for adopting this KIP.
I think there are basically two options:
1. By default, the behavior proposed in this KIP is turned off, and
operators can turn it
on by adding the "controller.listener.name" config and
To be clear, we can only remove configs in major new versions. Otherwise,
we can only deprecate them.
Ismael
On Mon, Sep 10, 2018 at 10:47 AM Jun Rao wrote:
Hi, Lucas,
For the network idlePct, your understanding is correct. Currently,
networkIdlePct metric is calculated as the average of (1 - io-ratio) in the
selector of all network threads.
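The averaging described here can be sketched as follows (the io-ratio values are hypothetical; in the broker they come from each network thread's selector metrics):

```python
# Sketch of networkIdlePct as the average of (1 - io-ratio)
# over the selectors of all network threads.

def network_idle_pct(io_ratios):
    """Average idle fraction across network threads."""
    return sum(1 - r for r in io_ratios) / len(io_ratios)

# Three network threads spending 20%, 50%, and 80% of their time on I/O:
print(network_idle_pct([0.2, 0.5, 0.8]))  # → 0.5
```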
The metrics part looks good to me in the updated KIP.
I am still not quite sure about the configs.
100.
@Jun Rao
One clarification, currently on the selector level, we have the
io-wait-ratio metric.
For the new controller *network* thread, we can use it directly for
IdlePct, instead of using 1 - io-ratio,
so that the logic is similar to the current average IdlePct for network
threads. Is that
Hi, Lucas,
Thanks for the updated KIP.
For monitoring the network thread utilization for the control plane, we
already have the metric io-ratio at the selector level (idlePct is 1 -
io-ratio). So, we just need to give that selector a meaningful name.
For monitoring the io thread utilization for
Thanks Jun for your quick response. It looks like I forgot to click the
"Update" button. :)
It's updated now.
Regarding the idle ratio metrics for the additional threads, I discussed
with Joel,
and think they are not as useful, and I added our reasoning in the last
paragraph of the
"How are
Hi, Lucas,
Thanks for the reply. Have you actually updated the KIP? The wiki says that
it was last updated on Aug. 22, and some of the changes that you mentioned
(#1 and #3) are not there.
Also, regarding Joel's comment on network/request idle ratio metrics, could
you comment on whether they
Thanks for the comments, Jun.
1. I think the answer should be no, since "inter.broker.listener.name" is
also used for replication traffic, and merging these two types of requests
into a single-threaded tunnel would defeat the purpose of this KIP and also
hurt replication throughput.
So I
Hi, Lucas,
Sorry for the delay. The new proposal looks good to me overall. A few minor
comments below.
1. It's possible that listener.name.for.controller is set, but set to the
same value as inter.broker.listener.name. In that case, should we have a
single network thread and the request handling
Hi Jun and Joel,
The KIP writeup has changed by quite a bit since your +1.
Can you please take another review? Thanks a lot for your time!
Lucas
On Tue, Jul 17, 2018 at 10:33 AM, Joel Koshy wrote:
+1 on the KIP.
(I'm not sure it's actually necessary to introduce the condition variables
for the concern that Jun raised, but it's an implementation detail that we
can defer to a discussion in the PR.)
On Sat, Jul 14, 2018 at 10:45 PM, Lucas Wang wrote:
Hi Jun,
I agree that by using the condition variables, there is no need to add such
a new config.
Also thanks for approving this KIP.
Lucas
Hi, Lucas,
For 3, it has an impact on "io.thread.poll.timeout.ms", which is part of
the public interface. If we optimize the implementation, it seems that there
is no need for this new config. Other than that, the KIP looks good to me.
Thanks,
Jun
On Thu, Jul 12, 2018 at 10:08 PM, Lucas Wang
Hi Jun,
About 3, thanks for the clarification. I like your proposal in that it
avoids the delay for controller requests when the data request queue is
empty.
In comparison, the approach I described earlier is simpler to understand
and implement.
Between these two I actually like your suggested
Hi, Lucas,
2. Good point about not knowing the request type in the memory pool. Looking
at the implementation, it seems that queued.max.request.bytes is orthogonal
to queued.max.requests. So, this seems fine.
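For context, the two broker settings referred to here look like this in server.properties (values are illustrative; treat the -1 "unbounded" default for the byte limit as an assumption):

```properties
# Cap on the number of requests queued before the request handlers.
queued.max.requests=500

# Cap on the total bytes of queued requests; -1 disables the byte limit.
queued.max.request.bytes=-1
```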
3. The implementation that you suggested sounds good. It would be useful
not to
For #1, I agree that obtaining a good default is not trivial. We can revisit
in the future.
For #2, the table is readable.
Thanks
On Tue, Jul 3, 2018 at 4:23 PM, Lucas Wang wrote:
@Ted
For #1, it's probably hard to predict M since it also depends on the
hardware.
I'm not sure how to use the suggested formula for the default value if we
don't know M.
Also TO is the default timeout we want to figure out, and the formula seems
to be recursive.
I'd suggest we stay with the
For #1, I don't know what would be a good approximation for M.
Maybe use max((TO / 2) / N, M / N) as default value for poll timeout ?
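The suggested default can be written out as a one-liner (a sketch; TO, M, and N are the poll timeout, request-processing time, and thread count from this thread, and the values below are purely illustrative):

```python
# Sketch of the suggested default: max((TO / 2) / N, M / N).
# Note, as pointed out elsewhere in the thread, TO is itself the
# timeout being chosen, which makes the formula recursive.

def suggested_poll_timeout(TO, M, N):
    return max((TO / 2) / N, M / N)

print(suggested_poll_timeout(TO=300, M=100, N=8))  # → 18.75
```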
For #2, I don't see the picture in the email :-)
Can you use a third-party website?
Thanks
On Mon, Jul 2, 2018 at 5:17 PM, Lucas Wang wrote:
Hi Ted,
1. I'm neutral on making the poll timeout parameter configurable.
Mainly because as a config, it could be confusing for operators who try to
choose a value for it.
To understand the implication of this value better,
let's use TO to represent the timeout value under discussion,
M to
bq. which is hard coded to be 300 milliseconds
Have you considered making the duration configurable?
The comparison at the end of your email seems to have lost its tabular form
when copied.
Do you mind posting that part again?
Thanks
On Fri, Jun 29, 2018 at 4:53 PM, Lucas Wang wrote:
Hi Jun,
Thanks for your comments.
1. I just replied in the discussion thread about the positive change this
KIP can still bring
if implemented on the latest trunk, which includes the async ZK operations
for KAFKA-5642.
The evaluation is done using an integration test.
In production, we have not
Hi, Lucas,
Thanks for the KIP. A few comments below.
1. As Eno mentioned in the discussion thread, I am wondering how much of
this is still an issue after KAFKA-5642. With that fix, the requests from
the controller to the brokers are batched in all the common cases. Have you
deployed Kafka 1.1?
+1
-Harsha
On Wed, Jun 20, 2018, at 5:15 AM, Thomas Crayford wrote:
+1 (non-binding)
On Tue, Jun 19, 2018 at 8:20 PM, Lucas Wang wrote:
Hi Jun, Ismael,
Can you please take a look when you get a chance? Thanks!
Lucas
On Mon, Jun 18, 2018 at 1:47 PM, Ted Yu wrote:
+1
On Mon, Jun 18, 2018 at 1:04 PM, Lucas Wang wrote:
Hi All,
I've addressed a couple of comments in the discussion thread for KIP-291,
and got no objections after making the changes. Therefore I would like to
start the voting thread.
KIP: