Re: [DISCUSS] KIP-794: Strictly Uniform Sticky Partitioner

2021-11-04 Thread Luke Chen
Hi Artem,
Thanks for the KIP! And thanks for reminding me to complete KIP-782 soon.
:)

Back to the KIP, I have some comments:
1. You proposed a new config, "partitioner.sticky.batch.size", but I
can't see how we're going to use it to make the partitioner better.
Please explain this more in the KIP (with an example, as in suggestion
(4), it would be even clearer).
2. In the "Proposed change" section, your example uses
"ClassicDefaultPartitioner"; is that referring to the current default
sticky partitioner? It would be better to give your proposed partitioner
a different name, to distinguish the default one from the new one.
(Although after implementation, we are going to just use the same name.)
3. So, if my understanding is correct, you're going to have a "batch"
switch that is disabled until the in-flight requests are full, and
enabled otherwise. Is that right? Sorry, I don't see any advantage of
having this batch switch. Could you explain more? (A sketch of how I
currently read the proposal is below, after these comments.)
4. I think it would be clearer if the motivation section had a concrete,
real example describing the issue we face with the current sticky
partitioner, and if the proposed changes section then used the same
example to describe in more detail how your approach fixes it.
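
To be concrete, here is the rough sketch I mentioned in comment (3) of
how I currently read the proposal. The config name comes from the KIP;
everything else (the default value, the per-topic bookkeeping, switching
partitions in round-robin order, and the concurrency handling) is my
guess, not the KIP's actual design:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

public class UniformStickyBatchPartitioner implements Partitioner {
    // Guessed default: same as the producer's default batch.size.
    private long stickyBatchSize = 16 * 1024;
    private final ConcurrentMap<String, Integer> stickyPartition = new ConcurrentHashMap<>();
    private final ConcurrentMap<String, AtomicLong> bytesToCurrent = new ConcurrentHashMap<>();

    @Override
    public void configure(Map<String, ?> configs) {
        Object v = configs.get("partitioner.sticky.batch.size"); // name from the KIP
        if (v != null)
            stickyBatchSize = Long.parseLong(v.toString());
    }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes != null) // keyed records keep the usual hash partitioning
            return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
        int current = stickyPartition.computeIfAbsent(
            topic, t -> ThreadLocalRandom.current().nextInt(numPartitions));
        long produced = bytesToCurrent.computeIfAbsent(topic, t -> new AtomicLong())
            .addAndGet(valueBytes == null ? 0 : valueBytes.length);
        // Switch partitions once ~stickyBatchSize bytes went to the current
        // one, so bytes end up uniform across partitions over time.
        if (produced >= stickyBatchSize) {
            bytesToCurrent.get(topic).set(0);
            current = (current + 1) % numPartitions;
            stickyPartition.put(topic, current);
        }
        return current;
    }

    @Override
    public void close() {}
}

If this sketch is close, it would help to spell out in the KIP where the
"batch" switch changes this picture.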

Thank you.
Luke

On Fri, Nov 5, 2021 at 1:38 AM Artem Livshits
 wrote:

> Hello,
>
> This is the discussion thread for
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-794%3A+Strictly+Uniform+Sticky+Partitioner
> .
>
> The proposal is a bug fix for
> https://issues.apache.org/jira/browse/KAFKA-10888, but it does include a
> client config change, so we have a KIP to discuss.
>
> -Artem
>


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.1 #4

2021-11-04 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 496588 lines...]
[2021-11-05T00:03:05.815Z] > Task :raft:testClasses UP-TO-DATE
[2021-11-05T00:03:05.815Z] > Task :connect:json:testJar
[2021-11-05T00:03:05.815Z] > Task :connect:json:testSrcJar
[2021-11-05T00:03:05.815Z] > Task :metadata:compileTestJava UP-TO-DATE
[2021-11-05T00:03:05.815Z] > Task :metadata:testClasses UP-TO-DATE
[2021-11-05T00:03:05.815Z] > Task :core:compileScala UP-TO-DATE
[2021-11-05T00:03:05.815Z] > Task :core:classes UP-TO-DATE
[2021-11-05T00:03:05.815Z] > Task :core:compileTestJava NO-SOURCE
[2021-11-05T00:03:06.761Z] > Task 
:clients:generateMetadataFileForMavenJavaPublication
[2021-11-05T00:03:06.761Z] > Task 
:clients:generatePomFileForMavenJavaPublication
[2021-11-05T00:03:06.761Z] 
[2021-11-05T00:03:06.761Z] > Task :streams:processMessages
[2021-11-05T00:03:06.761Z] Execution optimizations have been disabled for task 
':streams:processMessages' to ensure correctness due to the following reasons:
[2021-11-05T00:03:06.761Z]   - Gradle detected a problem with the following 
location: 
'/home/jenkins/workspace/Kafka_kafka_3.1/streams/src/generated/java/org/apache/kafka/streams/internals/generated'.
 Reason: Task ':streams:srcJar' uses this output of task 
':streams:processMessages' without declaring an explicit or implicit 
dependency. This can lead to incorrect results being produced, depending on 
what order the tasks are executed. Please refer to 
https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency
 for more details about this problem.
[2021-11-05T00:03:06.761Z] MessageGenerator: processed 1 Kafka message JSON 
files(s).
[2021-11-05T00:03:06.761Z] 
[2021-11-05T00:03:06.761Z] > Task :streams:compileJava UP-TO-DATE
[2021-11-05T00:03:06.761Z] > Task :streams:classes UP-TO-DATE
[2021-11-05T00:03:06.762Z] > Task :streams:copyDependantLibs UP-TO-DATE
[2021-11-05T00:03:06.762Z] > Task :core:compileTestScala UP-TO-DATE
[2021-11-05T00:03:06.762Z] > Task :core:testClasses UP-TO-DATE
[2021-11-05T00:03:06.762Z] > Task :streams:jar UP-TO-DATE
[2021-11-05T00:03:06.762Z] > Task :streams:test-utils:compileJava UP-TO-DATE
[2021-11-05T00:03:06.762Z] > Task 
:streams:generateMetadataFileForMavenJavaPublication
[2021-11-05T00:03:10.370Z] > Task :connect:api:javadoc
[2021-11-05T00:03:10.370Z] > Task :connect:api:copyDependantLibs UP-TO-DATE
[2021-11-05T00:03:10.370Z] > Task :connect:api:jar UP-TO-DATE
[2021-11-05T00:03:10.370Z] > Task 
:connect:api:generateMetadataFileForMavenJavaPublication
[2021-11-05T00:03:10.370Z] > Task :connect:json:copyDependantLibs UP-TO-DATE
[2021-11-05T00:03:10.370Z] > Task :connect:json:jar UP-TO-DATE
[2021-11-05T00:03:10.370Z] > Task 
:connect:json:generateMetadataFileForMavenJavaPublication
[2021-11-05T00:03:10.370Z] > Task 
:connect:json:publishMavenJavaPublicationToMavenLocal
[2021-11-05T00:03:10.370Z] > Task :connect:api:javadocJar
[2021-11-05T00:03:10.370Z] > Task :connect:json:publishToMavenLocal
[2021-11-05T00:03:10.370Z] > Task :connect:api:compileTestJava UP-TO-DATE
[2021-11-05T00:03:10.370Z] > Task :connect:api:testClasses UP-TO-DATE
[2021-11-05T00:03:10.370Z] > Task :connect:api:testJar
[2021-11-05T00:03:10.370Z] > Task :connect:api:testSrcJar
[2021-11-05T00:03:11.315Z] > Task 
:connect:api:publishMavenJavaPublicationToMavenLocal
[2021-11-05T00:03:11.315Z] > Task :connect:api:publishToMavenLocal
[2021-11-05T00:03:14.927Z] > Task :streams:javadoc
[2021-11-05T00:03:14.927Z] > Task :streams:javadocJar
[2021-11-05T00:03:14.927Z] > Task :streams:compileTestJava UP-TO-DATE
[2021-11-05T00:03:14.927Z] > Task :streams:testClasses UP-TO-DATE
[2021-11-05T00:03:15.874Z] > Task :streams:testJar
[2021-11-05T00:03:16.820Z] > Task :streams:testSrcJar
[2021-11-05T00:03:16.820Z] > Task 
:streams:publishMavenJavaPublicationToMavenLocal
[2021-11-05T00:03:16.820Z] > Task :streams:publishToMavenLocal
[2021-11-05T00:03:17.765Z] > Task :clients:javadoc
[2021-11-05T00:03:17.765Z] > Task :clients:javadocJar
[2021-11-05T00:03:18.711Z] 
[2021-11-05T00:03:18.711Z] > Task :clients:srcJar
[2021-11-05T00:03:18.711Z] Execution optimizations have been disabled for task 
':clients:srcJar' to ensure correctness due to the following reasons:
[2021-11-05T00:03:18.711Z]   - Gradle detected a problem with the following 
location: '/home/jenkins/workspace/Kafka_kafka_3.1/clients/src/generated/java'. 
Reason: Task ':clients:srcJar' uses this output of task 
':clients:processMessages' without declaring an explicit or implicit 
dependency. This can lead to incorrect results being produced, depending on 
what order the tasks are executed. Please refer to 
https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency
 for more details about this problem.
[2021-11-05T00:03:20.480Z] 
[2021-11-05T00:03:20.480Z] > Task :clients:testJar
[2021-11-05T00:03:21.426Z] > Task :clients:testSrcJar
[2021-11-05T00:03:2

Re: [DISCUSS] KIP-778 KRaft Upgrades

2021-11-04 Thread Jun Rao
Hi, David, Jose and Colin,

Thanks for the reply. A few more comments.

12. It seems that we haven't updated the AdminClient accordingly?

14. "Metadata snapshot is generated and sent to the other inactive
controllers and to brokers". I thought we wanted each broker to generate
its own snapshot independently? If only the controller generates the
snapshot, how do we force other brokers to pick it up?

16. If a feature version is new, one may not want to enable it immediately
after the cluster is upgraded. However, if a feature version has been
stable, requiring every user to run a command to upgrade to that version
seems inconvenient. One way to improve this is for each feature to define
one version as the default. Then, when we upgrade a cluster, we will
automatically upgrade the feature to the default version. An admin could
use the tool to upgrade to a version higher than the default. (A rough
sketch of this check follows after point 20 below.)

20. "The quorum controller can assist with this process by generating a
metadata snapshot after a metadata.version increase has been committed to
the metadata log. This snapshot will be a convenient way to let broker and
controller components rebuild their entire in-memory state following an
upgrade." The new version of the software could read both the new and the
old version. Is generating a new snapshot during upgrade needed?
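
To make the idea in point 16 concrete, the controller-side check could
look roughly like the sketch below. The names and shape here are mine
and purely illustrative; nothing like this is in the KIP yet:

import java.util.Collection;

public final class FeatureAutoFinalize {

    public static final class SupportedRange {
        final short min;
        final short max;
        SupportedRange(short min, short max) { this.min = min; this.max = max; }
    }

    // The level to finalize automatically after an upgrade: start from the
    // feature's declared default and cap it by what every registered broker
    // supports; return -1 if the brokers have no common usable level.
    public static short levelToFinalize(short defaultLevel,
                                        Collection<SupportedRange> brokers) {
        short minOfMaxes = Short.MAX_VALUE;
        short maxOfMins = Short.MIN_VALUE;
        for (SupportedRange r : brokers) {
            if (r.max < minOfMaxes) minOfMaxes = r.max;
            if (r.min > maxOfMins) maxOfMins = r.min;
        }
        short level = (short) Math.min(defaultLevel, minOfMaxes);
        return level >= maxOfMins ? level : -1;
    }
}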

Jun


On Wed, Nov 3, 2021 at 5:42 PM Colin McCabe  wrote:

> On Tue, Oct 12, 2021, at 10:34, Jun Rao wrote:
> > Hi, David,
> >
> > One more comment.
> >
> > 16. The main reason why KIP-584 requires finalizing a feature manually is
> > that in the ZK world, the controller doesn't know all brokers in a
> cluster.
> > A broker temporarily down is not registered in ZK. In the KRaft world,
> the
> > controller keeps track of all brokers, including those that are
> temporarily
> > down. This makes it possible for the controller to automatically
> finalize a
> > feature---it's safe to do so when all brokers support that feature. This
> > will make the upgrade process much simpler since no manual command is
> > required to turn on a new feature. Have we considered this?
> >
> > Thanks,
> >
> > Jun
>
> Hi Jun,
>
> I guess David commented on this point already, but I'll comment as well. I
> always had the perception that users viewed rolls as potentially risky and
> were looking for ways to reduce the risk. Not enabling features right away
> after installing new software seems like one way to do that. If we had a
> feature to automatically upgrade during a roll, I'm not sure that I would
> recommend that people use it, because if something fails, it makes it
> harder to tell if the new feature is at fault, or something else in the new
> software.
>
> We already tell users to do a "double roll" when going to a new IBP. (Just
> to give background to people who haven't heard that phrase, the first roll
> installs the new software, and the second roll updates the IBP). So this
> KIP-778 mechanism is basically very similar to that, except the second
> thing isn't a roll, but just an upgrade command. So I think this is
> consistent with what we currently do.
>
> Also, just like David said, we can always add auto-upgrade later if there
> is demand...
>
> best,
> Colin
>
>
> >
> > On Thu, Oct 7, 2021 at 5:19 PM Jun Rao  wrote:
> >
> >> Hi, David,
> >>
> >> Thanks for the KIP. A few comments below.
> >>
> >> 10. It would be useful to describe how the controller node determines
> the
> >> RPC version used to communicate to other controller nodes. There seems
> to
> >> be a bootstrap problem. A controller node can't read the log and
> >> therefore the feature level until a quorum leader is elected. But leader
> >> election requires an RPC.
> >>
> >> 11. For downgrades, it would be useful to describe how to determine the
> >> downgrade process (generating new snapshot, propagating the snapshot,
> etc)
> >> has completed. We could block the UpdateFeature request until the
> process
> >> is completed. However, since the process could take time, the request
> could
> >> time out. Another way is through DescribeFeature and the server only
> >> reports downgraded versions after the process is completed.
> >>
> >> 12. Since we are changing UpdateFeaturesRequest, do we need to change
> the
> >> AdminClient api for updateFeatures too?
> >>
> >> 13. For the paragraph starting with "In the absence of an operator
> >> defined value for metadata.version", in KIP-584, we described how to
> >> finalize features with New cluster bootstrap. In that case, it's
> >> inconvenient for the users to have to run an admin tool to finalize the
> >> version for each feature. Instead, the system detects that the /features
> >> path is missing in ZK and thus automatically finalizes every feature
> with
> >> the latest supported version. Could we do something similar in the KRaft
> >> mode?
> >>
> >> 14. After the quorum leader generates a new snapshot, how do we force
> >> other nodes to pick up the new snapshot?
> >>
> >> 15. I agree with Jose th

Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.0 #152

2021-11-04 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-786: Emit Metric Client Quota Values

2021-11-04 Thread Mason Legere
Hi All,

Thanks for all the comments and suggestions!  Addressing them in order:


- Have you considered enabling the new metrics by default?
> - If you prefer keeping a configuration to enable them, what about
> renaming it to "client.quota.value.metric.enable" or even
> "quota.value.metric.enable"?


- Initially I wanted to stay away from having them enabled by default,
having similar thoughts to what Luke mentioned: this code will be called
for every Fetch/Produce API request, and for most people quotas are
fairly static. However, as the overhead is minimal, I have no hard
opinions on this. Additionally, I imagine most users are specifically
selecting which metrics are published anyway within their Prometheus
agent config (or something similar), and to Tom's point, this should
also support filtering.

- I believe I must have made a typo somewhere, as
"client.quota.value.metric.enable" sounds more correct to me. Thanks for
the catch -- I will update the KIP.
For the scope of this KIP I preferred to have the additional "client" tag,
as quota values that are not scoped to the client level would not be
emitted.


But I think since the quota value won't change from time to time unless
> an admin alters it, it might be a waste of resources to record it on each
> produce/fetch API.
> It can alternatively be achieved by using kafka-configs.sh to describe
> ALL users/clients/default to have an overview of the quota values when
> needed.


This is a good point and something that still bugs me a bit about this
design. It seems "unmetricy" to report just the values - especially when
they are largely static. However, with many clusters and many thousands
of clients, architecting a solution that fits into an already existing
metrics/alerting pipeline becomes quite a bit of work. I agree with Tom's
comment on this.


> In the case it would make more sense to recording the "available
> capacity" that a client has available at a given time as a rate. However,
> in order for the rate to have the correct value some additional work would
> be needed
>
> Could you add some more information about the additional work, and why
> we're rejecting it? This seems like it would be pretty useful
>

Good point, I will update the KIP to give more insight into this. The
difficulty is largely related to how brittle the quota implementation
currently is (i.e. quotas are attached to a metric): instead of being
able to record `Quota - value` incrementally, we would need to aggregate
first for the math to work out. Without a more intrusive refactor, this
amounts to adding a new `MeasurableStat` (or something equivalent).
That being said, I moved away from that option because if you have the
quota value (which is already reported as a rate, so the dimensional
analysis works out) then you can easily calculate the remaining
"available capacity".
To my knowledge most alerting and monitoring platforms provide a
transform to take the difference between two time series, so that seemed
like an easier option.
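
To make that simpler option concrete, the broker-side change I have in
mind is little more than registering a gauge next to the existing rate
sensors. A minimal sketch, assuming placeholder metric and group names
(the final names will be in the KIP):

import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.Gauge;
import org.apache.kafka.common.metrics.Metrics;

public final class QuotaValueMetric {
    // Publish the configured quota bound as its own gauge so a collector can
    // compute remaining capacity as (quota-value - observed byte-rate).
    public static void register(Metrics metrics, String user, String clientId,
                                double quotaBytesPerSec) {
        Map<String, String> tags = new LinkedHashMap<>();
        tags.put("user", user);
        tags.put("client-id", clientId);
        MetricName name = metrics.metricName("quota-value", "ClientQuotas",
            "The configured quota bound for this user/client pair", tags);
        metrics.addMetric(name, (Gauge<Double>) (config, now) -> quotaBytesPerSec);
    }
}

With the quota value exposed like this, the remaining capacity is just
the difference between the quota-value gauge and the existing byte-rate
metric.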

Thanks again for your points, I'll update the KIP!

Mason


On Thu, Nov 4, 2021 at 12:12 PM Bob Barrett
 wrote:

> +1 to Tom's point. Having a metric is a lot more convenient than needing to
> periodically call an API, and anyone who isn't interested in the metric
> should be able to just not collect it.
>
> Thanks for the KIP, Mason! I think this will be very useful.
>
> Under the rejected alternatives, you say the following:
>
> > In the case it would make more sense to recording the "available
> capacity" that a client has available at a given time as a rate. However,
> in order for the rate to have the correct value some additional work would
> be needed
>
> Could you add some more information about the additional work, and why
> we're rejecting it? This seems like it would be pretty useful (I could
> imagine plotting both the current and maximum "available capacity" for a
> client to visualize how much of their quota they are using). Maybe it makes
> sense to defer this work to a later KIP, or maybe it isn't practical at
> all, but it's not currently clear from this KIP why that is.
>
> Thanks,
> Bob
>
> On Thu, Nov 4, 2021 at 4:14 AM Tom Bentley  wrote:
>
> > This is a good point Luke. Unrelatedly, I've considered the value of
> > exposing gauges for the expiry time of SSL certificates, which similarly
> > change rarely. The metrics collected are used to build dashboards, create
> > alerts etc. Thus to have, for example, an alert on approaching
> certificate
> > expiry, a metric has to be collected _somehow_, and if Kafka doesn't
> expose
> > it itself, then it forces people to write, deploy and monitor something
> > which bridges, (e.g.) the admin client and the metric store.
> >
> > I would argue that so long as such nearly-static metrics are in the
> > minority then the cost of collecting and storing them is insignificant
> > compared with the whole set of broker metrics. So on balance it's
>

Re: [DISCUSS] KIP-786: Emit Metric Client Quota Values

2021-11-04 Thread Bob Barrett
+1 to Tom's point. Having a metric is a lot more convenient than needing to
periodically call an API, and anyone who isn't interested in the metric
should be able to just not collect it.

Thanks for the KIP, Mason! I think this will be very useful.

Under the rejected alternatives, you say the following:

> In the case it would make more sense to recording the "available
capacity" that a client has available at a given time as a rate. However,
in order for the rate to have the correct value some additional work would
be needed

Could you add some more information about the additional work, and why
we're rejecting it? This seems like it would be pretty useful (I could
imagine plotting both the current and maximum "available capacity" for a
client to visualize how much of their quota they are using). Maybe it makes
sense to defer this work to a later KIP, or maybe it isn't practical at
all, but it's not currently clear from this KIP why that is.

Thanks,
Bob

On Thu, Nov 4, 2021 at 4:14 AM Tom Bentley  wrote:

> This is a good point Luke. Unrelatedly, I've considered the value of
> exposing gauges for the expiry time of SSL certificates, which similarly
> change rarely. The metrics collected are used to build dashboards, create
> alerts etc. Thus to have, for example, an alert on approaching certificate
> expiry, a metric has to be collected _somehow_, and if Kafka doesn't expose
> it itself, then it forces people to write, deploy and monitor something
> which bridges, (e.g.) the admin client and the metric store.
>
> I would argue that so long as such nearly-static metrics are in the
> minority then the cost of collecting and storing them is insignificant
> compared with the whole set of broker metrics. So on balance it's probably
> better for Kafka to support them out of the box than for people to have to
> invent their own components to get the information to the metric store. Also,
> I'm sure many metric collectors and stores will support filtering, so most
> of the cost can probably be avoided for those who don't want the
> functionality.
>
> Kind regards,
>
> Tom
>
> On Thu, Nov 4, 2021 at 10:25 AM Luke Chen  wrote:
>
> > Hi Mason,
> > Thanks for the KIP.
> > But I think since the quota value won't change from time to time unless
> > an admin alters it, it might be a waste of resources to record it on each
> > produce/fetch API.
> > It can alternatively be achieved by using kafka-configs.sh to describe
> > ALL users/clients/default to have an overview of the quota values when
> > needed.
> >
> > What do you think?
> >
> > Thank you.
> > Luke
> >
> >
> > On Wed, Nov 3, 2021 at 10:53 PM Mickael Maison  >
> > wrote:
> >
> > > Hi Mason,
> > >
> > > Thanks for the KIP. I think it's a good idea to also emit quota limits
> > > as metrics. It certainly simplifies monitoring/graphing if all the
> > > data come from the same source.
> > >
> > > The KIP looks good overall, just a couple of questions:
> > > - Have you considered enabling the new metrics by default?
> > > - If you prefer keeping a configuration to enable them, what about
> > > renaming it to "client.quota.value.metric.enable" or even
> > > "quota.value.metric.enable"?
> > >
> > > Thanks,
> > > Mickael
> > >
> > > On Wed, Oct 27, 2021 at 11:36 PM Mason Legere
> > >  wrote:
> > > >
> > > > Hi All,
> > > >
> > > > Haven't received any feedback on this yet but as it was a small
> change
> > > have
> > > > made a PR showing the functional components: pull request
> > > > 
> > > > Will update the related documentation outlining the new metric
> > attributes
> > > > in a bit.
> > > >
> > > > Best,
> > > > Mason Legere
> > > >
> > > > On Sat, Oct 23, 2021 at 4:00 PM Mason Legere <
> > > mason.leg...@salesforce.com>
> > > > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > I would like to start a discussion for my proposed KIP-786
> > > > > <
> > >
> >
> https://cwiki.apache.org/confluence/pages/resumedraft.action?draftId=191335406&draftShareId=9a2f3d65-5633-47c8-994c-f5a14738cb1e&;
> > >
> > > which
> > > > > aims to allow client quota values to be emitted as a standard jmx
> > MBean
> > > > > attribute - if enabled in the static broker configuration.
> > > > >
> > > > > Please note that I originally misnumbered this KIP and am
> re-creating
> > > this
> > > > > discussion thread for clarity. The original thread can be found at:
> > > Original
> > > > > Email Thread
> > > > > <
> > >
> >
> https://lists.apache.org/thread.html/r44e154761f22a42e4766f2098d1e33cb54865311f41648ebd9406a4f%40%3Cdev.kafka.apache.org%3E
> > > >
> > > > >
> > > > > Best,
> > > > > Mason Legere
> > > > >
> > >
> >
>


[DISCUSS] KIP-794: Strictly Uniform Sticky Partitioner

2021-11-04 Thread Artem Livshits
Hello,

This is the discussion thread for
https://cwiki.apache.org/confluence/display/KAFKA/KIP-794%3A+Strictly+Uniform+Sticky+Partitioner
.

The proposal is a bug fix for
https://issues.apache.org/jira/browse/KAFKA-10888, but it does include a
client config change, so we have a KIP to discuss.

-Artem


Re: [DISCUSS] KIP-714: Client metrics and observability

2021-11-04 Thread David Mao
Hey Magnus,

I noticed that the KIP outlines the initial selectors supported as:

   - client_instance_id - CLIENT_INSTANCE_ID UUID string representation.
   - client_software_name  - client software implementation name.
   - client_software_version  - client software implementation version.

In the given reactive monitoring workflow, we mention that the application
user does not know their client's client instance ID, but it's outlined
that the operator can add a metrics subscription selecting for clientId. I
don't see clientId as one of the supported selectors.
I can see how this would have made sense in a previous iteration given that
the previous client instance ID proposal was to construct the client
instance ID using clientId as a prefix. Now that the client instance ID is
a UUID, would we want to add clientId as a supported selector?
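
For context, my mental model of the application side is roughly the
sketch below. The clientInstanceId() method name comes from the KIP's
client API section; the exact Java signature, return type, and behavior
are my assumptions:

import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class ClientInstanceIdExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "my-app"); // what the operator actually knows
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Blocks until the broker assigns an instance id, or times out.
            // The application has to log/expose this itself before an operator
            // can select on client_instance_id.
            System.out.println("client instance id: "
                + producer.clientInstanceId(Duration.ofSeconds(5)));
        }
    }
}

Nothing forces an application to log this id, whereas clientId is right
there in its config, which is why a clientId selector seems valuable.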
Let me know what you think.

David

On Tue, Oct 19, 2021 at 12:33 PM Magnus Edenhill  wrote:

> Hi Mickael!
>
> Den tis 19 okt. 2021 kl 19:30 skrev Mickael Maison <
> mickael.mai...@gmail.com
> >:
>
> > Hi Magnus,
> >
> > Thanks for the proposal.
> >
> > 1. Looking at the protocol section, isn't "ClientInstanceId" expected
> > to be a field in GetTelemetrySubscriptionsResponseV0? Otherwise, how
> > does a client retrieve this value?
> >
>
> Good catch, it got removed by mistake in one of the edits.
>
>
> >
> > 2. In the client API section, you mention a new method
> > "clientInstanceId()". Can you clarify which interfaces are affected?
> > Is it only Consumer and Producer?
> >
>
> And Admin. Will update the KIP.
>
>
>
> > 3. I'm a bit concerned this is enabled by default. Even if the data
> > collected is supposed to be not sensitive, I think this can be
> > problematic in some environments. Also users don't seem to have the
> > choice to only expose some metrics. Knowing how much data transit
> > through some applications can be considered critical.
> >
>
> The broker already knows how much data transits through the client though,
> right?
> Care has been taken not to expose information in the standard metrics that
> might
> reveal sensitive information.
>
> Do you have an example of how the proposed metrics could leak sensitive
> information?
> As for limiting the what metrics to export; I guess that could make sense
> in some
> very sensitive use-cases, but those users might disable metrics altogether
> for now.
> Could these concerns be addressed by a later KIP?
>
>
>
> >
> > 4. As a user, how do you know if your application is actively sending
> > metrics? Are there new metrics exposing what's going on, like how much
> > data is being sent?
> >
>
> That's a good question.
> Since the proposed metrics interface is not aimed at, or directly available
> to, the application
> I guess there's little point in adding it here; instead we could add
> something to the
> existing JMX metrics?
> Do you have any suggestions?
>
>
>
> > 5. If all metrics are enabled on a regular Consumer or Producer, do
> > you have an idea how much throughput this would use?
> >
>
> It depends on the number of partitions/topics/etc. the client is producing
> to/consuming from.
> I'll add some sizes to the KIP for some typical use-cases.
>
>
> Thanks,
> Magnus
>
>
> > Thanks
> >
> > On Tue, Oct 19, 2021 at 5:06 PM Magnus Edenhill 
> > wrote:
> > >
> > > Den tis 19 okt. 2021 kl 13:22 skrev Tom Bentley :
> > >
> > > > Hi Magnus,
> > > >
> > > > I reviewed the KIP since you called the vote (sorry for not reviewing
> > when
> > > > you announced your intention to call the vote). I have a few
> questions
> > on
> > > > some of the details.
> > > >
> > > > 1. There's no Javadoc on ClientTelemetryPayload.data(), so I don't
> know
> > > > whether the payload is exposed through this method as compressed or
> > not.
> > > > Later on you say "Decompression of the payloads will be handled by
> the
> > > > broker metrics plugin, the broker should expose a suitable
> > decompression
> > > > API to the metrics plugin for this purpose.", which suggests it's the
> > > > compressed data in the buffer, but then we don't know which codec was
> > used,
> > > > nor the API via which the plugin should decompress it if required for
> > > > forwarding to the ultimate metrics store. Should the
> > ClientTelemetryPayload
> > > > expose a method to get the compression and a decompressor?
> > > >
> > >
> > > Good point, updated.
> > >
> > >
> > >
> > > > 2. The client-side API is expressed as StringOrError
> > > > ClientInstance::ClientInstanceId(int timeout_ms). I understand that
> > you're
> > > > thinking about the librdkafka implementation, but it would be good to
> > show
> > > > the API as it would appear on the Apache Kafka clients.
> > > >
> > >
> > > This was meant as pseudo-code, but I changed it to Java.
> > >
> > >
> > > > 3. "PushTelemetryRequest|Response - protocol request used by the
> > client to
> > > > send metrics to any broker it is connected to." To be clear, this
> means
> > > > that the client can choose a

Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #553

2021-11-04 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-788: Allow configuring num.network.threads per listener

2021-11-04 Thread David Jacot
+1 (binding), thanks for the KIP!

Best,
David

On Thu, Nov 4, 2021 at 12:44 PM Tom Bentley  wrote:
>
> Hi Mickael,
>
> Thanks for the KIP, +1 (binding).
>
> Cheers,
>
> Tom
>
> On Thu, Nov 4, 2021 at 11:40 AM Rajini Sivaram 
> wrote:
>
> > Hi Mickael,
> >
> > +1 (binding)
> > Thanks for the KIP!
> >
> > Regards,
> >
> > Rajini
> >
> >
> > On Thu, Nov 4, 2021 at 11:00 AM Mickael Maison 
> > wrote:
> >
> > > Hi Luke,
> > >
> > > I've updated the KIP accordingly.
> > >
> > > Thanks
> > >
> > > On Thu, Nov 4, 2021 at 8:41 AM Luke Chen  wrote:
> > > >
> > > > Hi Mickael,
> > > > Thanks for the KIP.
> > > > It's great to have the capability to fine tune the number of threads
> > per
> > > > listener!
> > > >
> > > > Just 2 minor comments for the KIP:
> > > > 1. The discussion thread is not attached in KIP
> > > > 2. Israel raised the case-sensitive comment, but your response wasn't
> > > > put into the KIP
> > > >
> > > > Otherwise, LGTM!
> > > > +1 (non-binding)
> > > >
> > > > Thank you.
> > > > Luke
> > > >
> > > > On Wed, Nov 3, 2021 at 8:17 PM Mickael Maison <
> > mickael.mai...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I'd like to start the vote on KIP-788. It will allow setting the
> > > > > number of network threads per listener.
> > > > >
> > > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-788%3A+Allow+configuring+num.network.threads+per+listener
> > > > >
> > > > > Please let me know if you have any feedback.
> > > > > Thanks
> > > > >
> > >
> >


Re: [VOTE] KIP-788: Allow configuring num.network.threads per listener

2021-11-04 Thread Tom Bentley
Hi Mickael,

Thanks for the KIP, +1 (binding).

Cheers,

Tom

On Thu, Nov 4, 2021 at 11:40 AM Rajini Sivaram 
wrote:

> Hi Mickael,
>
> +1 (binding)
> Thanks for the KIP!
>
> Regards,
>
> Rajini
>
>
> On Thu, Nov 4, 2021 at 11:00 AM Mickael Maison 
> wrote:
>
> > Hi Luke,
> >
> > I've updated the KIP accordingly.
> >
> > Thanks
> >
> > On Thu, Nov 4, 2021 at 8:41 AM Luke Chen  wrote:
> > >
> > > Hi Mickael,
> > > Thanks for the KIP.
> > > It's great to have the capability to fine tune the number of threads
> per
> > > listener!
> > >
> > > Just 2 minor comments for the KIP:
> > > 1. The discussion thread is not attached in KIP
> > > 2. Israel raised the case-sensitive comment, but your response wasn't
> > > put into the KIP
> > >
> > > Otherwise, LGTM!
> > > +1 (non-binding)
> > >
> > > Thank you.
> > > Luke
> > >
> > > On Wed, Nov 3, 2021 at 8:17 PM Mickael Maison <
> mickael.mai...@gmail.com>
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > I'd like to start the vote on KIP-788. It will allow setting the
> > > > number of network threads per listener.
> > > >
> > > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-788%3A+Allow+configuring+num.network.threads+per+listener
> > > >
> > > > Please let me know if you have any feedback.
> > > > Thanks
> > > >
> >
>


Re: Permissions to contribute to Apache Kafka

2021-11-04 Thread Bruno Cadonna

Hi Vicky,

You are all set up now!

Best,
Bruno

On 04.11.21 11:40, Vasiliki Papavasileiou wrote:

Wiki ID: vicky_papavas
Jira ID: vicky_papavas



Re: [VOTE] KIP-788: Allow configuring num.network.threads per listener

2021-11-04 Thread Rajini Sivaram
Hi Mickael,

+1 (binding)
Thanks for the KIP!

Regards,

Rajini


On Thu, Nov 4, 2021 at 11:00 AM Mickael Maison 
wrote:

> Hi Luke,
>
> I've updated the KIP accordingly.
>
> Thanks
>
> On Thu, Nov 4, 2021 at 8:41 AM Luke Chen  wrote:
> >
> > Hi Mickael,
> > Thanks for the KIP.
> > It's great to have the capability to fine tune the number of threads per
> > listener!
> >
> > Just 2 minor comments for the KIP:
> > 1. The discussion thread is not attached in KIP
> > 2. Israel raised the case-sensitive comment, but your response wasn't put
> > into the KIP
> >
> > Otherwise, LGTM!
> > +1 (non-binding)
> >
> > Thank you.
> > Luke
> >
> > On Wed, Nov 3, 2021 at 8:17 PM Mickael Maison 
> > wrote:
> >
> > > Hi all,
> > >
> > > I'd like to start the vote on KIP-788. It will allow setting the
> > > number of network threads per listener.
> > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-788%3A+Allow+configuring+num.network.threads+per+listener
> > >
> > > Please let me know if you have any feedback.
> > > Thanks
> > >
>


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #552

2021-11-04 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 508625 lines...]
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] > Task :streams:integrationTest
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys PASSED
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers STARTED
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers PASSED
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers STARTED
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount PASSED
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey STARTED
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers PASSED
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.processor.internals.HandlingSourceTopicDeletionIntegrationTest
 > shouldThrowErrorAfterSourceTopicDeleted STARTED
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey PASSED
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore STARTED
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.processor.internals.HandlingSourceTopicDeletionIntegrationTest
 > shouldThrowErrorAfterSourceTopicDeleted PASSED
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore PASSED
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountUnlimitedWindows STARTED
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountUnlimitedWindows PASSED
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed STARTED
[2021-11-04T11:19:55.550Z] streams-4: SMOKE-TEST-CLIENT-CLOSED
[2021-11-04T11:19:55.550Z] streams-2: SMOKE-TEST-CLIENT-CLOSED
[2021-11-04T11:19:55.550Z] streams-6: SMOKE-TEST-CLIENT-CLOSED
[2021-11-04T11:19:55.550Z] streams-0: SMOKE-TEST-CLIENT-CLOSED
[2021-11-04T11:19:55.550Z] streams-1: SMOKE-TEST-CLIENT-CLOSED
[2021-11-04T11:19:55.550Z] streams-3: SMOKE-TEST-CLIENT-CLOSED
[2021-11-04T11:19:55.550Z] streams-7: SMOKE-TEST-CLIENT-CLOSED
[2021-11-04T11:19:55.550Z] streams-8: SMOKE-TEST-CLIENT-CLOSED
[2021-11-04T11:19:55.550Z] streams-5: SMOKE-TEST-CLIENT-CLOSED
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed PASSED
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows STARTED
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows PASSED
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed STARTED
[2021-11-04T11:19:55.550Z] 
[2021-11-04T11:19:55.550Z] 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed PASSED
[2021-11-04T11:20:05.914Z] 
[2021-11-04T11:20:05.914Z] 
org.apache.kafka.streams.integration.TaskAssignorIntegrationTest > 
shouldProperlyConfigureTheAssignor STARTED
[2021-11-04T11:20:05.914Z] 
[2021-11-04T11:20:05.914Z] 
org.apache.kafka.streams.integration.TaskAssignorIntegrationTest > 
shouldProperlyConfigureTheAssignor PASSED
[2021-11-04T11:20:22.492Z] 
[2021-11-04T11:20:22.492Z] > Task :core:integrationTest
[2021-11-04T11:20:22.492Z] 
[2021-11-04T11:20:22.492Z] PlaintextConsumerTest > 
testMultiConsumerStickyAssignor() PASSED
[2021-11-04T11:20:22.492Z] 
[2021-11-04T11:20:22.492Z] PlaintextConsumerTest > 
testFetchRecordLargerThanFetchMaxBytes() STARTED
[2021-11-04T11:20:31.355Z] 
[2021-11-04T11:20:31.355Z] PlaintextConsumerTest > 
testFetchRecordLargerThanFetchMaxByt

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.0 #151

2021-11-04 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 345072 lines...]
[2021-11-04T11:15:59.786Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicPartition() STARTED
[2021-11-04T11:16:04.377Z] 
[2021-11-04T11:16:04.377Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicPartition() PASSED
[2021-11-04T11:16:04.377Z] 
[2021-11-04T11:16:04.377Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithUnknownTopicPartition() STARTED
[2021-11-04T11:16:08.648Z] 
[2021-11-04T11:16:08.648Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithUnknownTopicPartition() PASSED
[2021-11-04T11:16:08.648Z] 
[2021-11-04T11:16:08.648Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithTopicPartition() STARTED
[2021-11-04T11:16:13.041Z] 
[2021-11-04T11:16:13.041Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithTopicPartition() PASSED
[2021-11-04T11:16:13.041Z] 
[2021-11-04T11:16:13.041Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithTopicOnly() STARTED
[2021-11-04T11:16:16.682Z] 
[2021-11-04T11:16:16.682Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithTopicOnly() PASSED
[2021-11-04T11:16:16.682Z] 
[2021-11-04T11:16:16.682Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithTopicPartition() STARTED
[2021-11-04T11:16:21.106Z] 
[2021-11-04T11:16:21.106Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithTopicPartition() PASSED
[2021-11-04T11:16:21.106Z] 
[2021-11-04T11:16:21.106Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsNonExistingGroup() STARTED
[2021-11-04T11:16:25.403Z] 
[2021-11-04T11:16:25.403Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsNonExistingGroup() PASSED
[2021-11-04T11:16:25.403Z] 
[2021-11-04T11:16:25.403Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicOnly() STARTED
[2021-11-04T11:16:28.528Z] 
[2021-11-04T11:16:28.528Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicOnly() PASSED
[2021-11-04T11:16:29.570Z] 
[2021-11-04T11:16:29.570Z] TopicCommandIntegrationTest > 
testAlterPartitionCount() STARTED
[2021-11-04T11:16:35.600Z] 
[2021-11-04T11:16:35.600Z] TopicCommandIntegrationTest > 
testAlterPartitionCount() PASSED
[2021-11-04T11:16:35.600Z] 
[2021-11-04T11:16:35.600Z] TopicCommandIntegrationTest > 
testCreatePartitionsDoesNotRetryThrottlingQuotaExceededException() STARTED
[2021-11-04T11:16:39.940Z] 
[2021-11-04T11:16:39.940Z] TopicCommandIntegrationTest > 
testCreatePartitionsDoesNotRetryThrottlingQuotaExceededException() PASSED
[2021-11-04T11:16:39.940Z] 
[2021-11-04T11:16:39.940Z] TopicCommandIntegrationTest > 
testAlterWhenTopicDoesntExistWithIfExists() STARTED
[2021-11-04T11:16:44.366Z] 
[2021-11-04T11:16:44.366Z] TopicCommandIntegrationTest > 
testAlterWhenTopicDoesntExistWithIfExists() PASSED
[2021-11-04T11:16:44.366Z] 
[2021-11-04T11:16:44.366Z] TopicCommandIntegrationTest > 
testCreateWithDefaultReplication() STARTED
[2021-11-04T11:16:48.717Z] 
[2021-11-04T11:16:48.717Z] TopicCommandIntegrationTest > 
testCreateWithDefaultReplication() PASSED
[2021-11-04T11:16:48.717Z] 
[2021-11-04T11:16:48.717Z] TopicCommandIntegrationTest > 
testDescribeAtMinIsrPartitions() STARTED
[2021-11-04T11:16:59.123Z] 
[2021-11-04T11:16:59.123Z] TopicCommandIntegrationTest > 
testDescribeAtMinIsrPartitions() PASSED
[2021-11-04T11:16:59.123Z] 
[2021-11-04T11:16:59.123Z] TopicCommandIntegrationTest > 
testCreateWithNegativeReplicationFactor() STARTED
[2021-11-04T11:17:03.377Z] 
[2021-11-04T11:17:03.377Z] TopicCommandIntegrationTest > 
testCreateWithNegativeReplicationFactor() PASSED
[2021-11-04T11:17:03.377Z] 
[2021-11-04T11:17:03.377Z] TopicCommandIntegrationTest > 
testCreateWithInvalidReplicationFactor() STARTED
[2021-11-04T11:17:08.991Z] 
[2021-11-04T11:17:08.991Z] TopicCommandIntegrationTest > 
testCreateWithInvalidReplicationFactor() PASSED
[2021-11-04T11:17:08.991Z] 
[2021-11-04T11:17:08.991Z] TopicCommandIntegrationTest > 
testDeleteTopicDoesNotRetryThrottlingQuotaExceededException() STARTED
[2021-11-04T11:17:13.347Z] 
[2021-11-04T11:17:13.347Z] TopicCommandIntegrationTest > 
testDeleteTopicDoesNotRetryThrottlingQuotaExceededException() PASSED
[2021-11-04T11:17:13.347Z] 
[2021-11-04T11:17:13.347Z] TopicCommandIntegrationTest > 
testListTopicsWithExcludeInternal() STARTED
[2021-11-04T11:17:19.056Z] 
[2021-11-04T11:17:19.056Z] TopicCommandIntegrationTest > 
testListTopicsWithExcludeInternal() PASSED
[2

Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.1 #3

2021-11-04 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-786: Emit Metric Client Quota Values

2021-11-04 Thread Tom Bentley
This is a good point Luke. Unrelatedly, I've considered the value of
exposing gauges for the expiry time of SSL certificates, which similarly
change rarely. The metrics collected are used to build dashboards, create
alerts etc. Thus to have, for example, an alert on approaching certificate
expiry, a metric has to be collected _somehow_, and if Kafka doesn't expose
it itself, then it forces people to write, deploy and monitor something
which bridges, (e.g.) the admin client and the metric store.
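
As a throwaway illustration of how cheap such a gauge would be, here is
a sketch using Kafka's Metrics API; the metric and group names are
invented for the example, not actual Kafka metrics:

import java.security.cert.X509Certificate;
import java.util.Collections;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.Gauge;
import org.apache.kafka.common.metrics.Metrics;

public final class CertExpiryGauge {
    // Expose the certificate's notAfter timestamp (epoch millis); alerting on
    // "now is within N days of this value" is then trivial in the metric store.
    public static void register(Metrics metrics, String listener, X509Certificate cert) {
        MetricName name = metrics.metricName("certificate-not-after-ms", "SslMetrics",
            Collections.singletonMap("listener", listener));
        long notAfterMs = cert.getNotAfter().getTime();
        metrics.addMetric(name, (Gauge<Long>) (config, now) -> notAfterMs);
    }
}

That is a single long held in memory per certificate.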

I would argue that so long as such nearly-static metrics are in the
minority then the cost of collecting and storing them is insignificant
compared with the whole set of broker metrics. So on balance it's probably
better for Kafka to support them out of the box than for people to have to
invent their own components to get the information to the metric store. Also,
I'm sure many metric collectors and stores will support filtering, so most
of the cost can probably be avoided for those who don't want the
functionality.

Kind regards,

Tom

On Thu, Nov 4, 2021 at 10:25 AM Luke Chen  wrote:

> Hi Mason,
> Thanks for the KIP.
> But I think since the quota value won't change from time to time unless
> an admin alters it, it might be a waste of resources to record it on each
> produce/fetch API.
> It can alternatively be achieved by using kafka-configs.sh to describe
> ALL users/clients/default to have an overview of the quota values when
> needed.
>
> What do you think?
>
> Thank you.
> Luke
>
>
> On Wed, Nov 3, 2021 at 10:53 PM Mickael Maison 
> wrote:
>
> > Hi Mason,
> >
> > Thanks for the KIP. I think it's a good idea to also emit quota limits
> > as metrics. It certainly simplifies monitoring/graphing if all the
> > data come from the same source.
> >
> > The KIP looks good overall, just a couple of questions:
> > - Have you considered enabling the new metrics by default?
> > - If you prefer keeping a configuration to enable them, what about
> > renaming it to "client.quota.value.metric.enable" or even
> > "quota.value.metric.enable"?
> >
> > Thanks,
> > Mickael
> >
> > On Wed, Oct 27, 2021 at 11:36 PM Mason Legere
> >  wrote:
> > >
> > > Hi All,
> > >
> > > Haven't received any feedback on this yet but as it was a small change
> > have
> > > made a PR showing the functional components: pull request
> > > 
> > > Will update the related documentation outlining the new metric
> attributes
> > > in a bit.
> > >
> > > Best,
> > > Mason Legere
> > >
> > > On Sat, Oct 23, 2021 at 4:00 PM Mason Legere <
> > mason.leg...@salesforce.com>
> > > wrote:
> > >
> > > > Hi All,
> > > >
> > > > I would like to start a discussion for my proposed KIP-786
> > > > <
> >
> https://cwiki.apache.org/confluence/pages/resumedraft.action?draftId=191335406&draftShareId=9a2f3d65-5633-47c8-994c-f5a14738cb1e&;
> >
> > which
> > > > aims to allow client quota values to be emitted as a standard jmx
> MBean
> > > > attribute - if enabled in the static broker configuration.
> > > >
> > > > Please note that I originally misnumbered this KIP and am re-creating
> > this
> > > > discussion thread for clarity. The original thread can be found at:
> > Original
> > > > Email Thread
> > > > <
> >
> https://lists.apache.org/thread.html/r44e154761f22a42e4766f2098d1e33cb54865311f41648ebd9406a4f%40%3Cdev.kafka.apache.org%3E
> > >
> > > >
> > > > Best,
> > > > Mason Legere
> > > >
> >
>


Re: [VOTE] KIP-788: Allow configuring num.network.threads per listener

2021-11-04 Thread Mickael Maison
Hi Luke,

I've updated the KIP accordingly.

Thanks

On Thu, Nov 4, 2021 at 8:41 AM Luke Chen  wrote:
>
> Hi Mickael,
> Thanks for the KIP.
> It's great to have the capability to fine tune the number of threads per
> listener!
>
> Just 2 minor comments for the KIP:
> 1. The discussion thread is not attached in KIP
> 2. Israel raised the case-sensitive comment, but your response wasn't put
> into the KIP
>
> Otherwise, LGTM!
> +1 (non-binding)
>
> Thank you.
> Luke
>
> On Wed, Nov 3, 2021 at 8:17 PM Mickael Maison 
> wrote:
>
> > Hi all,
> >
> > I'd like to start the vote on KIP-788. It will allow setting the
> > number of network threads per listener.
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-788%3A+Allow+configuring+num.network.threads+per+listener
> >
> > Please let me know if you have any feedback.
> > Thanks
> >


Permissions to contribute to Apache Kafka

2021-11-04 Thread Vasiliki Papavasileiou
Wiki ID: vicky_papavas
Jira ID: vicky_papavas


Re: [DISCUSS] KIP-786: Emit Metric Client Quota Values

2021-11-04 Thread Luke Chen
Hi Mason,
Thanks for the KIP.
But I think since the quota value won't change from time to time unless
an admin alters it, it might be a waste of resources to record it on each
produce/fetch API.
It can alternatively be achieved by using kafka-configs.sh to describe
ALL users/clients/default to have an overview of the quota values when
needed.
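
For example, something like this (flags from memory, the exact output
format may differ):

bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe \
    --entity-type clients --entity-default
bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe \
    --entity-type users --entity-name user1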

What do you think?

Thank you.
Luke


On Wed, Nov 3, 2021 at 10:53 PM Mickael Maison 
wrote:

> Hi Mason,
>
> Thanks for the KIP. I think it's a good idea to also emit quota limits
> as metrics. It certainly simplifies monitoring/graphing if all the
> data come from the same source.
>
> The KIP looks good overall, just a couple of questions:
> - Have you considered enabling the new metrics by default?
> - If you prefer keeping a configuration to enable them, what about
> renaming it to "client.quota.value.metric.enable" or even
> "quota.value.metric.enable"?
>
> Thanks,
> Mickael
>
> On Wed, Oct 27, 2021 at 11:36 PM Mason Legere
>  wrote:
> >
> > Hi All,
> >
> > Haven't received any feedback on this yet but as it was a small change
> have
> > made a PR showing the functional components: pull request
> > 
> > Will update the related documentation outlining the new metric attributes
> > in a bit.
> >
> > Best,
> > Mason Legere
> >
> > On Sat, Oct 23, 2021 at 4:00 PM Mason Legere <
> mason.leg...@salesforce.com>
> > wrote:
> >
> > > Hi All,
> > >
> > > I would like to start a discussion for my proposed KIP-786
> > > <
> https://cwiki.apache.org/confluence/pages/resumedraft.action?draftId=191335406&draftShareId=9a2f3d65-5633-47c8-994c-f5a14738cb1e&;>
> which
> > > aims to allow client quota values to be emitted as a standard jmx MBean
> > > attribute - if enabled in the static broker configuration.
> > >
> > > Please note that I originally misnumbered this KIP and am re-creating
> this
> > > discussion thread for clarity. The original thread can be found at:
> Original
> > > Email Thread
> > > <
> https://lists.apache.org/thread.html/r44e154761f22a42e4766f2098d1e33cb54865311f41648ebd9406a4f%40%3Cdev.kafka.apache.org%3E
> >
> > >
> > > Best,
> > > Mason Legere
> > >
>


[jira] [Resolved] (KAFKA-13430) Remove broker-wide quota properties from the documentation

2021-11-04 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-13430.
-
Fix Version/s: 3.0.0
   3.1.0
 Reviewer: David Jacot
   Resolution: Fixed

> Remove broker-wide quota properties from the documentation
> --
>
> Key: KAFKA-13430
> URL: https://issues.apache.org/jira/browse/KAFKA-13430
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Major
> Fix For: 3.1.0, 3.0.0
>
>
> I found this problem while working on 
> [KAFKA-13341|https://issues.apache.org/jira/browse/KAFKA-13341].
> Broker-wide quota properties ({{quota.producer.default}}, 
> {{quota.consumer.default}}) are [removed in 
> 3.0|https://issues.apache.org/jira/browse/KAFKA-12591], but it is not applied 
> to the documentation yet.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka-site] dajac merged pull request #380: KAFKA-13430: Remove broker-wide quota properties from the documentation

2021-11-04 Thread GitBox


dajac merged pull request #380:
URL: https://github.com/apache/kafka-site/pull/380


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (KAFKA-13433) JsonConverter's convertToJson returns the default value when a field is optional with a default value and the value is null

2021-11-04 Thread Aiden Gong (Jira)
Aiden Gong created KAFKA-13433:
--

 Summary: JsonConverter's convertToJson returns the default value when a 
field is optional with a default value and the value is null.
 Key: KAFKA-13433
 URL: https://issues.apache.org/jira/browse/KAFKA-13433
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 3.0.0
Reporter: Aiden Gong
 Attachments: image-2021-11-04-16-25-18-890.png, 
image-2021-11-04-16-25-18-975.png

JsonConverter's convertToJson method returns the schema's default value when a 
field is optional with a default value and the actual value is null. 
!image-2021-11-04-16-25-18-975.png!
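
A minimal sketch of the scenario as I read it (my reconstruction; the
names are illustrative, see the attached screenshots for the real code):

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.json.JsonConverter;

public class Kafka13433Repro {
    public static void main(String[] args) {
        Schema schema = SchemaBuilder.string().optional().defaultValue("fallback").build();
        Map<String, String> config = new HashMap<>();
        config.put("schemas.enable", "false");
        config.put("converter.type", "value");
        JsonConverter converter = new JsonConverter();
        converter.configure(config);
        // The value is null; the expected serialized output is null, but the
        // schema default ("fallback") is written instead.
        byte[] json = converter.fromConnectData("topic", schema, null);
        System.out.println(json == null ? "null" : new String(json));
    }
}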



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-788: Allow configuring num.network.threads per listener

2021-11-04 Thread Luke Chen
Hi Mickael,
Thanks for the KIP.
It's great to have the capability to fine tune the number of threads per
listener!
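
For readers of the archive, the per-listener override proposed in the
KIP looks roughly like this (the listener names are just examples; see
the KIP for the final property name):

listeners=EXTERNAL://0.0.0.0:9092,REPLICATION://0.0.0.0:9093
num.network.threads=3
listener.name.external.num.network.threads=8
listener.name.replication.num.network.threads=2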

Just 2 minor comments for the KIP:
1. The discussion thread is not attached in KIP
2. Israel raised the case-sensitive comment, but your response wasn't put
into the KIP

Otherwise, LGTM!
+1 (non-binding)

Thank you.
Luke

On Wed, Nov 3, 2021 at 8:17 PM Mickael Maison 
wrote:

> Hi all,
>
> I'd like to start the vote on KIP-788. It will allow setting the
> number of network threads per listener.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-788%3A+Allow+configuring+num.network.threads+per+listener
>
> Please let me know if you have any feedback.
> Thanks
>