Re: [ANNOUNCE] New committer: Yash Mayya

2023-09-22 Thread Chaitanya Mukka
Congrats, Yash!! Well deserved.

Chaitanya Mukka
On 21 Sep 2023 at 8:58 PM +0530, Bruno Cadonna , wrote:
> Hi all,
>
> The PMC of Apache Kafka is pleased to announce a new Kafka committer
> Yash Mayya.
>
> Yash's major contributions are around Connect.
>
> Yash authored the following KIPs:
>
> KIP-793: Allow sink connectors to be used with topic-mutating SMTs
> KIP-882: Kafka Connect REST API configuration validation timeout
> improvements
> KIP-970: Deprecate and remove Connect's redundant task configurations
> endpoint
> KIP-980: Allow creating connectors in a stopped state
>
> Overall, Yash is known for insightful and friendly input to discussions
> and his high quality contributions.
>
> Congratulations, Yash!
>
> Thanks,
>
> Bruno (on behalf of the Apache Kafka PMC)


[GitHub] [kafka-site] divijvaidya merged pull request #549: MINOR: Fix site formatting for configs

2023-09-22 Thread via GitHub


divijvaidya merged PR #549:
URL: https://github.com/apache/kafka-site/pull/549


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: Apache Kafka 3.6.0 release

2023-09-22 Thread Divij Vaidya
Found a bug while testing TS feature in 3.6 -
https://issues.apache.org/jira/browse/KAFKA-15481

I don't consider it a blocker for the release since it's a concurrency
bug that should occur rarely, and the affected feature is in early access.
Sharing it here as an FYI in case someone else thinks differently.

--
Divij Vaidya

On Fri, Sep 22, 2023 at 1:26 AM Satish Duggana  wrote:
>
> Thanks Divij for raising a PR for doc formatting issue.
>
> On Thu, 21 Sep, 2023, 2:22 PM Divij Vaidya,  wrote:
>
> > Hey Satish
> >
> > I filed a PR to fix the website formatting bug in 3.6 documentation -
> > https://github.com/apache/kafka/pull/14419
> > Please take a look when you get a chance.
> >
> > --
> > Divij Vaidya
> >
> > On Tue, Sep 19, 2023 at 5:36 PM Chris Egerton 
> > wrote:
> > >
> > > Hi Satish,
> > >
> > > I think this qualifies as a blocker. This API has been around for years
> > now
> > > and, while we don't document it as not exposing duplicates*, it has come
> > > with that implicit contract since its inception. More importantly, it has
> > > also never exposed plugins that cannot be used on the worker. This change
> > > in behavior not only introduces duplicates*, it causes unreachable
> > plugins
> > > to be displayed. With this in mind, it seems to qualify pretty clearly
> > as a
> > > regression and we should not put out a release that includes it.
> > >
> > > * - Really, these aren't duplicates; rather, they're multiple copies of
> > the
> > > same plugin that come from different locations on the worker
> > >
> > > Best,
> > >
> > > Chris
> > >
> > > On Tue, Sep 19, 2023 at 4:31 AM Satish Duggana  > >
> > > wrote:
> > >
> > > > Hi Greg,
> > > > Is this API documented that it does not return duplicate entries?
> > > >
> > > > Can we also get an opinion from PMC/Committers who have KafkaConnect
> > > > expertise on whether this issue is a release blocker?
> > > >
> > > > If we agree that it is not a release blocker then we can have a
> > > > release note clarifying this behaviour and add a reference to the JIRA
> > > > that follows up on the possible solutions.
> > > >
> > > > Thanks,
> > > > Satish.
> > > >
> > > >
> > > > On Tue, 19 Sept 2023 at 03:29, Greg Harris
> > 
> > > > wrote:
> > > > >
> > > > > Hey Satish,
> > > > >
> > > > > After investigating further, I believe that this is a regression, but
> > > > > mostly a cosmetic one.
> > > > > I don't think there is significant risk of breaking clients with this
> > > > > change, but it would be confusing for users, so I'd still like to get
> > > > > the fix into the next RC.
> > > > > I've opened a PR here: https://github.com/apache/kafka/pull/14398
> > and
> > > > > I'll work to get it merged promptly.
> > > > >
> > > > > Thanks!
> > > > >
> > > > > On Mon, Sep 18, 2023 at 11:54 AM Greg Harris 
> > > > wrote:
> > > > > >
> > > > > > Hi Satish,
> > > > > >
> > > > > > While validating 3.6.0-rc0, I noticed this regression as compared
> > to
> > > > > > 3.5.1: https://issues.apache.org/jira/browse/KAFKA-15473
> > > > > >
> > > > > > Impact: The `connector-plugins` endpoint lists duplicates which may
> > > > > > cause confusion for users, or poor behavior in clients.
> > > > > > Using the other REST API endpoints appears unaffected.
> > > > > > I'll open a PR for this later today.
> > > > > >
> > > > > > Thanks,
> > > > > > Greg
> > > > > >
> > > > > > On Thu, Sep 14, 2023 at 11:56 AM Satish Duggana
> > > > > >  wrote:
> > > > > > >
> > > > > > > Thanks Justine for the update. I saw in the morning that these
> > > > changes
> > > > > > > are pushed to trunk and 3.6.
> > > > > > >
> > > > > > > ~Satish.
> > > > > > >
> > > > > > > On Thu, 14 Sept 2023 at 21:54, Justine Olshan
> > > > > > >  wrote:
> > > > > > > >
> > > > > > > > Hi Satish,
> > > > > > > > We were able to merge
> > > > > > > > https://issues.apache.org/jira/browse/KAFKA-15459 yesterday
> > > > > > > > and pick to 3.6.
> > > > > > > >
> > > > > > > > Hopefully nothing more from me on this release.
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Justine
> > > > > > > >
> > > > > > > > On Wed, Sep 13, 2023 at 9:51 PM Satish Duggana <
> > > > satish.dugg...@gmail.com>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > Thanks Luke for the update.
> > > > > > > > >
> > > > > > > > > ~Satish.
> > > > > > > > >
> > > > > > > > > On Thu, 14 Sept 2023 at 07:29, Luke Chen 
> > > > wrote:
> > > > > > > > > >
> > > > > > > > > > Hi Satish,
> > > > > > > > > >
> > > > > > > > > > Since this PR:
> > > > > > > > > > https://github.com/apache/kafka/pull/14366 only changes
> > the
> > > > doc, I've
> > > > > > > > > > backported to 3.6 branch. FYI.
> > > > > > > > > >
> > > > > > > > > > Thanks.
> > > > > > > > > > Luke
> > > > > > > > > >
> > > > > > > > > > On Thu, Sep 14, 2023 at 12:15 AM Justine Olshan
> > > > > > > > > >  wrote:
> > > > > > > > > >
> > > > > > > > > > > Hey Satish -- yes, you are correct. KAFKA-15459 only
> > affects
> > > > 3.6.
> > > > > > > > > > > PR should be finalized soon.
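
For anyone wanting to observe the /connector-plugins behaviour discussed above
(KAFKA-15473), a minimal check against a Connect worker looks roughly like the
following. It assumes a worker on the default REST port 8083; the code is only
illustrative and not part of any Kafka API.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConnectorPluginsCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8083/connector-plugins"))
            .GET()
            .build();
        // Before the regression each usable plugin class appears once; with the
        // regression the same class can show up once per plugin location on the
        // worker, including locations the worker cannot actually load from.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}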

[jira] [Created] (KAFKA-15485) Upgrade to JDK-21 (LTS release)

2023-09-22 Thread Divij Vaidya (Jira)
Divij Vaidya created KAFKA-15485:


 Summary: Upgrade to JDK-21 (LTS release)
 Key: KAFKA-15485
 URL: https://issues.apache.org/jira/browse/KAFKA-15485
 Project: Kafka
  Issue Type: Improvement
Reporter: Divij Vaidya
 Fix For: 3.7.0


JDK-21 is the latest LTS release; it reached GA on 19th Sept 2023. This 
ticket aims to upgrade the JDK used by Kafka to JDK-21 (currently JDK-20).

Thanks to proactive work done by [~ijuma] earlier [1][2][3], I do not 
anticipate major hiccups while upgrading to JDK-21.

As part of this JIRA we want to:
1. Upgrade Kafka to JDK 21
2. Replace the CI build for JDK 20 with JDK 21 (similar to [3] below)

As a stretch goal for this JIRA, we want to:
1. Explore the new features of JDK-21 (such as virtual threads; see the sketch 
below) and create separate Jira tickets to evaluate their usage in Kafka

[1] [https://github.com/apache/kafka/pull/13840]
 [2] [https://github.com/apache/kafka/pull/13582]
[3] [https://github.com/apache/kafka/pull/12948] 
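
For reference, a minimal sketch of the virtual-threads API mentioned in the stretch 
goal (plain JDK-21 {{java.util.concurrent}} usage, not Kafka code):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadsDemo {
    public static void main(String[] args) {
        // Requires JDK 21: each submitted task runs on a lightweight virtual thread.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                final int task = i;
                executor.submit(() -> System.out.println("task " + task + " on " + Thread.currentThread()));
            }
        } // the executor's close() waits for submitted tasks to complete
    }
}
{code}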



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka-site] yashmayya opened a new pull request, #550: MINOR: Add Yash Mayya to committers

2023-09-22 Thread via GitHub


yashmayya opened a new pull request, #550:
URL: https://github.com/apache/kafka-site/pull/550

   Screenshot: https://github.com/apache/kafka-site/assets/23502577/7a4bb4b3-b3f4-4133-a61e-cf0fc5060cc8


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (KAFKA-15486) Include NIO exceptions as I/O exceptions to be part of disk failure handling

2023-09-22 Thread Alexandre Dupriez (Jira)
Alexandre Dupriez created KAFKA-15486:
-

 Summary: Include NIO exceptions as I/O exceptions to be part of 
disk failure handling
 Key: KAFKA-15486
 URL: https://issues.apache.org/jira/browse/KAFKA-15486
 Project: Kafka
  Issue Type: Improvement
  Components: core, jbod
Reporter: Alexandre Dupriez


Currently, Apache Kafka offers the ability to detect and capture I/O errors 
when accessing the file system via the standard {{IOException}} from the JDK. 
There are cases, however, where I/O errors are reported only via exceptions such 
as {{BufferOverflowException}}, without an associated {{IOException}} on the 
produce or read path, so the data volume is not detected as unhealthy and is 
not included in the list of offline directories.

Specifically, we faced the following scenario on a broker:
 * The data volume hosting a log directory became saturated.
 * As expected, {{IOException}} were generated on the read/write path.
 * The log directory was set as offline and since it was the only log directory 
configured on the broker, Kafka automatically shut down.
 * Additional space was added to the data volume.
 * Kafka was then restarted.
 * No more {{IOException}} occurred; however, {{BufferOverflowException}} *[*]* 
was raised while trying to delete log segments in order to honour the retention 
settings of a topic. The log directory was not marked offline and the 
exceptions kept recurring indefinitely.

The retention settings were therefore not applied in this case. The mitigation 
consisted of restarting Kafka.

It may be worth considering adding {{BufferOverflowException}} and 
{{BufferUnderflowException}} (and any other related exception from the JDK NIO 
library that surfaces an I/O error) alongside the current {{IOException}} as a 
proxy for storage I/O failure. There may be known unintended consequences of 
doing so, which could be why these exceptions were not added already; or the 
impact may be too marginal to justify modifying the main I/O failure handling 
path and risking exposure to unknown unintended consequences.

*[*]*
{code:java}
java.nio.BufferOverflowException
    at java.base/java.nio.Buffer.nextPutIndex(Buffer.java:674)
    at java.base/java.nio.DirectByteBuffer.putLong(DirectByteBuffer.java:882)
    at kafka.log.TimeIndex.$anonfun$maybeAppend$1(TimeIndex.scala:134)
    at kafka.log.TimeIndex.maybeAppend(TimeIndex.scala:114)
    at kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:506)
    at kafka.log.Log.$anonfun$roll$8(Log.scala:2066)
    at kafka.log.Log.$anonfun$roll$8$adapted(Log.scala:2066)
    at scala.Option.foreach(Option.scala:437)
    at kafka.log.Log.$anonfun$roll$2(Log.scala:2066)
    at kafka.log.Log.roll(Log.scala:2482)
    at kafka.log.Log.maybeRoll(Log.scala:2017)
    at kafka.log.Log.append(Log.scala:1292)
    at kafka.log.Log.appendAsFollower(Log.scala:1155)
    at kafka.cluster.Partition.doAppendRecordsToFollowerOrFutureReplica(Partition.scala:1023)
    at kafka.cluster.Partition.appendRecordsToFollowerOrFutureReplica(Partition.scala:1030)
    at kafka.server.ReplicaFetcherThread.processPartitionData(ReplicaFetcherThread.scala:178)
    at kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$7(AbstractFetcherThread.scala:356)
    at scala.Option.foreach(Option.scala:437)
    at kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$6(AbstractFetcherThread.scala:345)
    at kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$6$adapted(AbstractFetcherThread.scala:344)
    at kafka.utils.Implicits$MapExtensionMethods$.$anonfun$forKeyValue$1(Implicits.scala:62)
    at scala.collection.convert.JavaCollectionWrappers$JMapWrapperLike.foreachEntry(JavaCollectionWrappers.scala:359)
    at scala.collection.convert.JavaCollectionWrappers$JMapWrapperLike.foreachEntry$(JavaCollectionWrappers.scala:355)
    at scala.collection.convert.JavaCollectionWrappers$AbstractJMapWrapper.foreachEntry(JavaCollectionWrappers.scala:309)
    at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:344)
    at kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3(AbstractFetcherThread.scala:141)
    at kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3$adapted(AbstractFetcherThread.scala:140)
    at scala.Option.foreach(Option.scala:437)
    at kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:140)
    at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:123)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
{code}
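
As a rough illustration of the proposal (not Kafka's actual failure-handling code; 
{{markOffline}} is a hypothetical stand-in for the real log-dir failure handling), the 
change would amount to widening the set of exceptions that route a log directory into 
the offline path:

{code:java}
import java.io.IOException;
import java.nio.BufferOverflowException;
import java.nio.BufferUnderflowException;

public final class LogDirIoGuard {

    @FunctionalInterface
    public interface IoAction {
        void run() throws IOException;
    }

    public static void runOrMarkOffline(String logDir, IoAction action) {
        try {
            action.run();
        } catch (IOException | BufferOverflowException | BufferUnderflowException e) {
            // Today only IOException reaches the offline-handling path; the idea is to
            // also treat the NIO exceptions above (see the stack trace) as storage failures.
            markOffline(logDir, e);
        }
    }

    // Hypothetical stand-in for marking the directory offline (or shutting down if it is the only one).
    private static void markOffline(String logDir, Throwable cause) {
        System.err.println("Marking log directory offline: " + logDir + " (" + cause + ")");
    }
}
{code}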


--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15487) CVE-2023-40167, CVE-2023-36479 - Upgrade jetty to 9.4.52, 10.0.16, 11.0.16, 12.0.1

2023-09-22 Thread Rafael Rios Saavedra (Jira)
Rafael Rios Saavedra created KAFKA-15487:


 Summary: CVE-2023-40167, CVE-2023-36479 - Upgrade jetty to 9.4.52, 
10.0.16, 11.0.16, 12.0.1
 Key: KAFKA-15487
 URL: https://issues.apache.org/jira/browse/KAFKA-15487
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.7.0, 2.6.1
Reporter: Rafael Rios Saavedra
Assignee: Dongjin Lee
 Fix For: 2.8.0, 2.7.1, 2.6.2, 3.0.0


*CVE-2023-40167* and *CVE-2023-36479* affect the Jetty version currently shipped with 
Kafka. For more information see 
[https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-40167] and 
[https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-36479].

Upgrading Jetty to *9.4.52* (or to 10.0.16, 11.0.16, or 12.0.1 on the corresponding 
release lines) should address these issues.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-858: Handle JBOD broker disk failure in KRaft

2023-09-22 Thread Ron Dagostino
Hi Igor.  Someone just asked about the case where a broker with a
single log directory restarts with a blank disk.  I looked at the
"Metadata caching" section, and I don't think it covers it as
currently written.  The PartitionRecord will not have an explicit UUID
for brokers that have just a single log directory (the idea was to
save space), but the UUID is inferred to be the UUID associated with
the single log directory that the broker had registered.  Assume a
broker restarts with an empty disk.  That disk will get a new UUID,
and upon broker registration the controller will see the UUID mismatch
between what the broker is presenting now and what it had presented
the last time it registered.  So we need to deal with this possibility
even for the case where a broker has a single log directory.  WDYT?

Ron
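
For what it's worth, the registration comparison described in the thread boils down to 
something like the sketch below (class and method names are made up, not the KIP's): 
the controller diffs the directory UUIDs from the previous registration against the 
ones presented now, and replicas assigned to a missing directory are treated as 
offline. In the blank-disk case above, the single old directory UUID would land in the 
missing set even though the broker still reports exactly one online directory.

import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

class DirectoryDiff {
    // Directory UUIDs that were registered previously but are absent from the new registration.
    static Set<UUID> missingDirectories(Set<UUID> previousDirs, Set<UUID> currentDirs) {
        Set<UUID> missing = new HashSet<>(previousDirs);
        missing.removeAll(currentDirs);
        return missing;
    }
}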

On Tue, Sep 19, 2023 at 10:04 AM Ron Dagostino  wrote:
>
> Ok, great, that makes sense, Igor.  Thanks.  +1 (binding) on the KIP from me.
>
> Ron
>
> > On Sep 13, 2023, at 11:58 AM, Igor Soarez  wrote:
> >
> > Hi Ron,
> >
> > Thanks for drilling down on this. I think the KIP isn't really clear here,
> > and the metadata caching section you quoted needs clarification.
> >
> > The "hosting broker's latest registration" refers to the previous,
> > not the current registration. The registrations are only compared by
> > the controller, when handling the broker registration request.
> >
> > Suppose broker b1 hosts two partitions, t-1 and t-2, in two
> > directories, d1 and d2. The broker is registered, and the
> > metadata correlates the replicas to their respective directories.
> > i.e. OnlineLogDirs=[d1,d2] and OfflineLogDirs=false
> >
> > The broker is then reconfigured to remove t-2 from log.dirs, and at startup,
> > the registration request shows OnlineLogDirs=[d1] and OfflineLogDirs=false.
> > The previous registration will only be replaced after a new successful
> > registration, regardless of how quickly or how often b1 restarts.
> > The controller compares the new registration against the previous one, and
> > notices that one of the directories has been removed.
> > So for any replica hosted on the broker that is assigned to that
> > missing log directory, a logical metadata update takes place
> > that assigns it to Uuid.OfflineDir, so Assignment.Directory
> > is updated for t-2. This value indicates that the replica
> > is offline — I have updated the section you quoted to address this.
> >
> > Once the broker catches up with metadata, it will select the only
> > configured log directory — d1 — for any partitions assigned to
> > Uuid.OfflineDir, and update the assignment.
> >
> > Best,
> >
> > --
> > Igor
> >
> >
> >


Re: [DISCUSS] KIP-714: Client metrics and observability

2023-09-22 Thread Kirk True
Hi Andrew/Jun,

I want to make sure I understand question/comment #119… In the case where a 
cluster without a metrics client receiver is later reconfigured and restarted 
to include a metrics client receiver, do we want the client to thereafter begin 
pushing metrics to the cluster? From Andrew’s response to question #119, it 
sounds like we’re using the presence/absence of the relevant RPCs in 
ApiVersionsResponse as the to-push-or-not-to-push indicator. Do I have that 
correct?

Thanks,
Kirk

> On Sep 21, 2023, at 7:42 AM, Andrew Schofield 
>  wrote:
> 
> Hi Jun,
> Thanks for your comments. I’ve updated the KIP to clarify where necessary.
> 
> 110. Yes, agree. The motivation section mentions this.
> 
> 111. The replacement of ‘-‘ with ‘.’ for metric names and the replacement of
> ‘-‘ with ‘_’ for attribute keys is following the OTLP guidelines. I think 
> it’s a bit
> of a debatable point. OTLP makes a distinction between a namespace and a
> multi-word component. If it was “client.id” then “client” would be a 
> namespace with
> an attribute key “id”. But “client_id” is just a key. So, it was intentional, 
> but debatable.
> 
> 112. Thanks. The link target moved. Fixed.
> 
> 113. Thanks. Fixed.
> 
> 114.1. If a standard metric makes sense for a client, it should use the exact 
> same
> name. If a standard metric doesn’t make sense for a client, then it can omit 
> that metric.
> 
> For a required metric, the situation is stronger. All clients must implement 
> these
> metrics with these names in order to implement the KIP. But the required 
> metrics
> are essentially the number of connections and the request latency, which do 
> not
> reference the underlying implementation of the client (which 
> producer.record.queue.time.max
> of course does).
> 
> I suppose someone might build a producer-only client that didn’t have 
> consumer metrics.
> In this case, the consumer metrics would conceptually have the value 0 and 
> would not
> need to be sent to the broker.
> 
> 114.2. If a client does not implement some metrics, they will not be 
> available for
> analysis and troubleshooting. It just makes the ability to combine metrics 
> from lots of
> different clients less complete.
> 
> 115. I think it was probably a mistake to be so specific about threading in 
> this KIP.
> When the consumer threading refactor is complete, of course, it would do the 
> appropriate
> equivalent. I’ve added a clarification and massively simplified this section.
> 
> 116. I removed “client.terminating”.
> 
> 117. Yes. Horrid. Fixed.
> 
> 118. The Terminating flag just indicates that this is the final 
> PushTelemetryRequest
> from this client. Any subsequent request will be rejected. I think this flag 
> should remain.
> 
> 119. Good catch. This was actually contradicting another part of the KIP. The 
> current behaviour
> is indeed preserved. If the broker doesn’t have a client metrics receiver 
> plugin, the new RPCs
> in this KIP are “turned off” and not reported in ApiVersionsResponse. The 
> client will not
> attempt to push metrics.
> 
> 120. The error handling table lists the error codes for 
> PushTelemetryResponse. I’ve added one
> but it looked good to me. GetTelemetrySubscriptions doesn’t have any error 
> codes, since the
> situation in which the client telemetry is not supported is handled by the 
> RPCs not being offered
> by the broker.
> 
> 121. Again, I think it’s probably a mistake to be specific about threading. 
> Removed.
> 
> 122. Good catch. For DescribeConfigs, the ACL operation should be
> “DESCRIBE_CONFIGS”. For AlterConfigs, the ACL operation should be
> “ALTER” (not “WRITE” as it said). The checks are made on the CLUSTER
> resource.
> 
> Thanks for the detailed review.
> 
> Thanks,
> Andrew
> 
>> 
>> 110. Another potential motivation is the multiple clients support. Some of
>> the places may not have good monitoring support for non-java clients.
>> 
>> 111. OpenTelemetry Naming: We replace '-' with '.' for metric name and
>> replace '-' with '_' for attributes. Why is the inconsistency?
>> 
>> 112. OTLP specification: Page is not found from the link.
>> 
>> 113. "Defining standard and required metrics makes the monitoring and
>> troubleshooting of clients from various client types ": Incomplete sentence.
>> 
>> 114. standard/required metrics
>> 114.1 Do other clients need to implement those metrics with the exact same
>> names?
>> 114.2 What happens if some of those metrics are missing from a client?
>> 
>> 115. "KafkaConsumer: both the "heart beat" and application threads": We
>> have an ongoing effort to refactor the consumer threading model (
>> https://cwiki.apache.org/confluence/display/KAFKA/Consumer+threading+refactor+design).
>> Once this is done, PRC requests will only be made from the background
>> thread. Should this KIP follow the new model only?
>> 
>> 116. 'The metrics should contain the reason for the client termination by
>> including the client.terminating metric with the label “reaso

Re: [ANNOUNCE] New committer: Lucas Brutschy

2023-09-22 Thread Mickael Maison
Congratulations Lucas!

On Fri, Sep 22, 2023 at 7:13 AM Luke Chen  wrote:
>
> Congratulations, Lucas!
>
> Luke
>
> On Fri, Sep 22, 2023 at 6:53 AM Tom Bentley  wrote:
>
> > Congratulations!
> >
> > On Fri, 22 Sept 2023 at 09:11, Sophie Blee-Goldman  > >
> > wrote:
> >
> > > Congrats Lucas!
> > >
> >


Re: [ANNOUNCE] New committer: Yash Mayya

2023-09-22 Thread Mickael Maison
Congratulations Yash!

On Fri, Sep 22, 2023 at 9:25 AM Chaitanya Mukka
 wrote:
>
> Congrats, Yash!! Well deserved.
>
> Chaitanya Mukka
> On 21 Sep 2023 at 8:58 PM +0530, Bruno Cadonna , wrote:
> > Hi all,
> >
> > The PMC of Apache Kafka is pleased to announce a new Kafka committer
> > Yash Mayya.
> >
> > Yash's major contributions are around Connect.
> >
> > Yash authored the following KIPs:
> >
> > KIP-793: Allow sink connectors to be used with topic-mutating SMTs
> > KIP-882: Kafka Connect REST API configuration validation timeout
> > improvements
> > KIP-970: Deprecate and remove Connect's redundant task configurations
> > endpoint
> > KIP-980: Allow creating connectors in a stopped state
> >
> > Overall, Yash is known for insightful and friendly input to discussions
> > and his high quality contributions.
> >
> > Congratulations, Yash!
> >
> > Thanks,
> >
> > Bruno (on behalf of the Apache Kafka PMC)


[GitHub] [kafka-site] yashmayya merged pull request #550: MINOR: Add Yash Mayya to committers

2023-09-22 Thread via GitHub


yashmayya merged PR #550:
URL: https://github.com/apache/kafka-site/pull/550


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka-site] yashmayya commented on pull request #550: MINOR: Add Yash Mayya to committers

2023-09-22 Thread via GitHub


yashmayya commented on PR #550:
URL: https://github.com/apache/kafka-site/pull/550#issuecomment-1731773427

   Thanks Chris!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [DISCUSS] KIP-714: Client metrics and observability

2023-09-22 Thread Andrew Schofield
Hi Kirk,
Thanks for your question. You are correct that the presence or absence of the 
new RPCs in the
ApiVersionsResponse tells the client whether to request the telemetry 
subscriptions and push
metrics.

This is of course tricky in practice. It would be conceivable, as a cluster is 
upgraded to AK 3.7
or as a client metrics receiver plugin is deployed across the cluster, that a 
client connects to some
brokers that support the new RPCs and some that do not.

Here’s my suggestion:
* If a client is not connected to any brokers that support the new RPCs, it 
cannot push metrics.
* If a client is only connected to brokers that support the new RPCs, it will 
use the new RPCs in
accordance with the KIP.
* If a client is connected to some brokers that support the new RPCs and some 
that do not, it will
use the new RPCs with the supporting subset of brokers in accordance with the 
KIP.

Comments?

Thanks,
Andrew
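
To make the suggestion concrete, the client-side gating could look roughly like the 
sketch below (BrokerInfo and supportsTelemetryRpcs are illustrative names; a real 
client would read this from the per-broker ApiVersionsResponse it already caches):

import java.util.List;

record BrokerInfo(String id, boolean supportsTelemetryRpcs) {}

class TelemetryGate {
    // Brokers eligible for GetTelemetrySubscriptions / PushTelemetry.
    static List<BrokerInfo> eligibleBrokers(List<BrokerInfo> connected) {
        return connected.stream().filter(BrokerInfo::supportsTelemetryRpcs).toList();
    }

    // If no connected broker advertises the new RPCs, the client simply does not push metrics.
    static boolean canPushMetrics(List<BrokerInfo> connected) {
        return !eligibleBrokers(connected).isEmpty();
    }
}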

> On 22 Sep 2023, at 16:01, Kirk True  wrote:
>
> Hi Andrew/Jun,
>
> I want to make sure I understand question/comment #119… In the case where a 
> cluster without a metrics client receiver is later reconfigured and restarted 
> to include a metrics client receiver, do we want the client to thereafter 
> begin pushing metrics to the cluster? From Andrew’s response to question 
> #119, it sounds like we’re using the presence/absence of the relevant RPCs in 
> ApiVersionsResponse as the to-push-or-not-to-push indicator. Do I have that 
> correct?
>
> Thanks,
> Kirk
>
>> On Sep 21, 2023, at 7:42 AM, Andrew Schofield 
>>  wrote:
>>
>> Hi Jun,
>> Thanks for your comments. I’ve updated the KIP to clarify where necessary.
>>
>> 110. Yes, agree. The motivation section mentions this.
>>
>> 111. The replacement of ‘-‘ with ‘.’ for metric names and the replacement of
>> ‘-‘ with ‘_’ for attribute keys is following the OTLP guidelines. I think 
>> it’s a bit
>> of a debatable point. OTLP makes a distinction between a namespace and a
>> multi-word component. If it was “client.id” then “client” would be a 
>> namespace with
>> an attribute key “id”. But “client_id” is just a key. So, it was 
>> intentional, but debatable.
>>
>> 112. Thanks. The link target moved. Fixed.
>>
>> 113. Thanks. Fixed.
>>
>> 114.1. If a standard metric makes sense for a client, it should use the 
>> exact same
>> name. If a standard metric doesn’t make sense for a client, then it can omit 
>> that metric.
>>
>> For a required metric, the situation is stronger. All clients must implement 
>> these
>> metrics with these names in order to implement the KIP. But the required 
>> metrics
>> are essentially the number of connections and the request latency, which do 
>> not
>> reference the underlying implementation of the client (which 
>> producer.record.queue.time.max
>> of course does).
>>
>> I suppose someone might build a producer-only client that didn’t have 
>> consumer metrics.
>> In this case, the consumer metrics would conceptually have the value 0 and 
>> would not
>> need to be sent to the broker.
>>
>> 114.2. If a client does not implement some metrics, they will not be 
>> available for
>> analysis and troubleshooting. It just makes the ability to combine metrics 
>> from lots
>> different clients less complete.
>>
>> 115. I think it was probably a mistake to be so specific about threading in 
>> this KIP.
>> When the consumer threading refactor is complete, of course, it would do the 
>> appropriate
>> equivalent. I’ve added a clarification and massively simplified this section.
>>
>> 116. I removed “client.terminating”.
>>
>> 117. Yes. Horrid. Fixed.
>>
>> 118. The Terminating flag just indicates that this is the final 
>> PushTelemetryRequest
>> from this client. Any subsequent request will be rejected. I think this flag 
>> should remain.
>>
>> 119. Good catch. This was actually contradicting another part of the KIP. 
>> The current behaviour
>> is indeed preserved. If the broker doesn’t have a client metrics receiver 
>> plugin, the new RPCs
>> in this KIP are “turned off” and not reported in ApiVersionsResponse. The 
>> client will not
>> attempt to push metrics.
>>
>> 120. The error handling table lists the error codes for 
>> PushTelemetryResponse. I’ve added one
>> but it looked good to me. GetTelemetrySubscriptions doesn’t have any error 
>> codes, since the
>> situation in which the client telemetry is not supported is handled by the 
>> RPCs not being offered
>> by the broker.
>>
>> 121. Again, I think it’s probably a mistake to be specific about threading. 
>> Removed.
>>
>> 122. Good catch. For DescribeConfigs, the ACL operation should be
>> “DESCRIBE_CONFIGS”. For AlterConfigs, the ACL operation should be
>> “ALTER” (not “WRITE” as it said). The checks are made on the CLUSTER
>> resource.
>>
>> Thanks for the detailed review.
>>
>> Thanks,
>> Andrew
>>
>>>
>>> 110. Another potential motivation is the multiple clients support. Some of
>>> the places may not have good monitoring support for non-java clients.
>>>
>

[DISCUSS] KIP-982: Access SslPrincipalMapper and kerberosShortNamer in Custom KafkaPrincipalBuilder

2023-09-22 Thread Raghu B
Hi everyone,

I would like to start the discussion on the KIP-982 to Access
SslPrincipalMapper and kerberosShortNamer in Custom KafkaPrincipalBuilder
https://cwiki.apache.org/confluence/display/KAFKA/KIP-982%3A+Access+SslPrincipalMapper+and+kerberosShortNamer+in+Custom+KafkaPrincipalBuilder

Looking forward to your feedback!

Thanks,
Raghu
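
For context, a custom principal builder today looks roughly like the sketch below (it 
uses the public KafkaPrincipalBuilder interface; the extractName() helper is a 
hand-rolled stand-in). The gap the KIP addresses is that such a class has no supported 
way to reuse the broker's configured ssl.principal.mapping.rules or Kerberos 
short-name mapping, so that logic has to be reimplemented by hand:

import javax.net.ssl.SSLPeerUnverifiedException;
import org.apache.kafka.common.security.auth.AuthenticationContext;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.security.auth.KafkaPrincipalBuilder;
import org.apache.kafka.common.security.auth.SslAuthenticationContext;

public class CustomPrincipalBuilder implements KafkaPrincipalBuilder {

    @Override
    public KafkaPrincipal build(AuthenticationContext context) {
        if (context instanceof SslAuthenticationContext sslContext) {
            try {
                String dn = sslContext.session().getPeerPrincipal().getName();
                return new KafkaPrincipal(KafkaPrincipal.USER_TYPE, extractName(dn));
            } catch (SSLPeerUnverifiedException e) {
                return KafkaPrincipal.ANONYMOUS;
            }
        }
        return KafkaPrincipal.ANONYMOUS;
    }

    // Hand-rolled mapping that duplicates what ssl.principal.mapping.rules already does on the broker.
    private String extractName(String distinguishedName) {
        for (String part : distinguishedName.split(",")) {
            String trimmed = part.trim();
            if (trimmed.startsWith("CN=")) {
                return trimmed.substring("CN=".length());
            }
        }
        return distinguishedName;
    }
}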


Re: [DISCUSS] KIP-714: Client metrics and observability

2023-09-22 Thread Philip Nee
Hi Andrew -

Question on top of your answers: Do you think the client should actively
search for a broker that supports this RPC? As previously mentioned, the
broker uses the leastLoadedNode to find its first connection (am
I correct?), and what if that broker doesn't support the metric push?

P

On Fri, Sep 22, 2023 at 10:20 AM Andrew Schofield <
andrew_schofield_j...@outlook.com> wrote:

> Hi Kirk,
> Thanks for your question. You are correct that the presence or absence of
> the new RPCs in the
> ApiVersionsResponse tells the client whether to request the telemetry
> subscriptions and push
> metrics.
>
> This is of course tricky in practice. It would be conceivable, as a
> cluster is upgraded to AK 3.7
> or as a client metrics receiver plugin is deployed across the cluster,
> that a client connects to some
> brokers that support the new RPCs and some that do not.
>
> Here’s my suggestion:
> * If a client is not connected to any brokers that support in the new
> RPCs, it cannot push metrics.
> * If a client is only connected to brokers that support the new RPCs, it
> will use the new RPCs in
> accordance with the KIP.
> * If a client is connected to some brokers that support the new RPCs and
> some that do not, it will
> use the new RPCs with the supporting subset of brokers in accordance with
> the KIP.
>
> Comments?
>
> Thanks,
> Andrew
>
> > On 22 Sep 2023, at 16:01, Kirk True  wrote:
> >
> > Hi Andrew/Jun,
> >
> > I want to make sure I understand question/comment #119… In the case
> where a cluster without a metrics client receiver is later reconfigured and
> restarted to include a metrics client receiver, do we want the client to
> thereafter begin pushing metrics to the cluster? From Andrew’s response to
> question #119, it sounds like we’re using the presence/absence of the
> relevant RPCs in ApiVersionsResponse as the to-push-or-not-to-push
> indicator. Do I have that correct?
> >
> > Thanks,
> > Kirk
> >
> >> On Sep 21, 2023, at 7:42 AM, Andrew Schofield <
> andrew_schofield_j...@outlook.com> wrote:
> >>
> >> Hi Jun,
> >> Thanks for your comments. I’ve updated the KIP to clarify where
> necessary.
> >>
> >> 110. Yes, agree. The motivation section mentions this.
> >>
> >> 111. The replacement of ‘-‘ with ‘.’ for metric names and the
> replacement of
> >> ‘-‘ with ‘_’ for attribute keys is following the OTLP guidelines. I
> think it’s a bit
> >> of a debatable point. OTLP makes a distinction between a namespace and a
> >> multi-word component. If it was “client.id” then “client” would be a
> namespace with
> >> an attribute key “id”. But “client_id” is just a key. So, it was
> intentional, but debatable.
> >>
> >> 112. Thanks. The link target moved. Fixed.
> >>
> >> 113. Thanks. Fixed.
> >>
> >> 114.1. If a standard metric makes sense for a client, it should use the
> exact same
> >> name. If a standard metric doesn’t make sense for a client, then it can
> omit that metric.
> >>
> >> For a required metric, the situation is stronger. All clients must
> implement these
> >> metrics with these names in order to implement the KIP. But the
> required metrics
> >> are essentially the number of connections and the request latency,
> which do not
> >> reference the underlying implementation of the client (which
> producer.record.queue.time.max
> >> of course does).
> >>
> >> I suppose someone might build a producer-only client that didn’t have
> consumer metrics.
> >> In this case, the consumer metrics would conceptually have the value 0
> and would not
> >> need to be sent to the broker.
> >>
> >> 114.2. If a client does not implement some metrics, they will not be
> available for
> >> analysis and troubleshooting. It just makes the ability to combine
> metrics from lots
> >> different clients less complete.
> >>
> >> 115. I think it was probably a mistake to be so specific about
> threading in this KIP.
> >> When the consumer threading refactor is complete, of course, it would
> do the appropriate
> >> equivalent. I’ve added a clarification and massively simplified this
> section.
> >>
> >> 116. I removed “client.terminating”.
> >>
> >> 117. Yes. Horrid. Fixed.
> >>
> >> 118. The Terminating flag just indicates that this is the final
> PushTelemetryRequest
> >> from this client. Any subsequent request will be rejected. I think this
> flag should remain.
> >>
> >> 119. Good catch. This was actually contradicting another part of the
> KIP. The current behaviour
> >> is indeed preserved. If the broker doesn’t have a client metrics
> receiver plugin, the new RPCs
> >> in this KIP are “turned off” and not reported in ApiVersionsResponse.
> The client will not
> >> attempt to push metrics.
> >>
> >> 120. The error handling table lists the error codes for
> PushTelemetryResponse. I’ve added one
> >> but it looked good to me. GetTelemetrySubscriptions doesn’t have any
> error codes, since the
> >> situation in which the client telemetry is not supported is handled by
> the RPCs not being offered
> >> by the brok

Re: [DISCUSS] KIP-714: Client metrics and observability

2023-09-22 Thread Andrew Schofield
Hi Philip,
No, I do not think it should actively search for a broker that supports the new
RPCs. In general, either all of the brokers or none of the brokers will support 
it.
In the window, where the cluster is being upgraded or client telemetry is being
enabled, there might be a mixed situation. I wouldn’t put too much effort into
this mixed scenario. As the client finds brokers which support the new RPCs,
it can begin to follow the KIP-714 mechanism.

Thanks,
Andrew

> On 22 Sep 2023, at 20:01, Philip Nee  wrote:
>
> Hi Andrew -
>
> Question on top of your answers: Do you think the client should actively
> search for a broker that supports this RPC? As previously mentioned, the
> broker uses the leastLoadedNode to find its first connection (am
> I correct?), and what if that broker doesn't support the metric push?
>
> P
>
> On Fri, Sep 22, 2023 at 10:20 AM Andrew Schofield <
> andrew_schofield_j...@outlook.com> wrote:
>
>> Hi Kirk,
>> Thanks for your question. You are correct that the presence or absence of
>> the new RPCs in the
>> ApiVersionsResponse tells the client whether to request the telemetry
>> subscriptions and push
>> metrics.
>>
>> This is of course tricky in practice. It would be conceivable, as a
>> cluster is upgraded to AK 3.7
>> or as a client metrics receiver plugin is deployed across the cluster,
>> that a client connects to some
>> brokers that support the new RPCs and some that do not.
>>
>> Here’s my suggestion:
>> * If a client is not connected to any brokers that support in the new
>> RPCs, it cannot push metrics.
>> * If a client is only connected to brokers that support the new RPCs, it
>> will use the new RPCs in
>> accordance with the KIP.
>> * If a client is connected to some brokers that support the new RPCs and
>> some that do not, it will
>> use the new RPCs with the supporting subset of brokers in accordance with
>> the KIP.
>>
>> Comments?
>>
>> Thanks,
>> Andrew
>>
>>> On 22 Sep 2023, at 16:01, Kirk True  wrote:
>>>
>>> Hi Andrew/Jun,
>>>
>>> I want to make sure I understand question/comment #119… In the case
>> where a cluster without a metrics client receiver is later reconfigured and
>> restarted to include a metrics client receiver, do we want the client to
>> thereafter begin pushing metrics to the cluster? From Andrew’s response to
>> question #119, it sounds like we’re using the presence/absence of the
>> relevant RPCs in ApiVersionsResponse as the to-push-or-not-to-push
>> indicator. Do I have that correct?
>>>
>>> Thanks,
>>> Kirk
>>>
 On Sep 21, 2023, at 7:42 AM, Andrew Schofield <
>> andrew_schofield_j...@outlook.com> wrote:

 Hi Jun,
 Thanks for your comments. I’ve updated the KIP to clarify where
>> necessary.

 110. Yes, agree. The motivation section mentions this.

 111. The replacement of ‘-‘ with ‘.’ for metric names and the
>> replacement of
 ‘-‘ with ‘_’ for attribute keys is following the OTLP guidelines. I
>> think it’s a bit
 of a debatable point. OTLP makes a distinction between a namespace and a
 multi-word component. If it was “client.id” then “client” would be a
>> namespace with
 an attribute key “id”. But “client_id” is just a key. So, it was
>> intentional, but debatable.

 112. Thanks. The link target moved. Fixed.

 113. Thanks. Fixed.

 114.1. If a standard metric makes sense for a client, it should use the
>> exact same
 name. If a standard metric doesn’t make sense for a client, then it can
>> omit that metric.

 For a required metric, the situation is stronger. All clients must
>> implement these
 metrics with these names in order to implement the KIP. But the
>> required metrics
 are essentially the number of connections and the request latency,
>> which do not
 reference the underlying implementation of the client (which
>> producer.record.queue.time.max
 of course does).

 I suppose someone might build a producer-only client that didn’t have
>> consumer metrics.
 In this case, the consumer metrics would conceptually have the value 0
>> and would not
 need to be sent to the broker.

 114.2. If a client does not implement some metrics, they will not be
>> available for
 analysis and troubleshooting. It just makes the ability to combine
>> metrics from lots
 different clients less complete.

 115. I think it was probably a mistake to be so specific about
>> threading in this KIP.
 When the consumer threading refactor is complete, of course, it would
>> do the appropriate
 equivalent. I’ve added a clarification and massively simplified this
>> section.

 116. I removed “client.terminating”.

 117. Yes. Horrid. Fixed.

 118. The Terminating flag just indicates that this is the final
>> PushTelemetryRequest
 from this client. Any subsequent request will be rejected. I think this
>> flag should remain.

 119. Good catch. This was actually contradicting anot

[jira] [Created] (KAFKA-15488) Add StarRocks to the database integration list

2023-09-22 Thread Albert Wong (Jira)
Albert Wong created KAFKA-15488:
---

 Summary:  Add StarRocks to the database integration list
 Key: KAFKA-15488
 URL: https://issues.apache.org/jira/browse/KAFKA-15488
 Project: Kafka
  Issue Type: Improvement
Reporter: Albert Wong


On [https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem], I'd like to 
add StarRocks to the list of database integrations.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2223

2023-09-22 Thread Apache Jenkins Server
See 




Re: question

2023-09-22 Thread Luke Chen
Hi 殿杰

In short, we don't support it now.
But welcome to submit a PR to fix the gap.
You can check this ticket for more information:
https://issues.apache.org/jira/browse/KAFKA-7025

Thanks.
Luke

On Sat, Sep 23, 2023 at 2:14 AM shidian...@mxnavi.com 
wrote:

>
> hello,
>
> I'm working on Kafka development. Now I have a question: does Kafka
> support an Android client?
>
>
>
>
> 石殿杰
> Technology Center
> Email: shidian...@mxnavi.com
> Phone: 18341724011
>


[ANNOUNCE] New Kafka PMC Member: Justine Olshan

2023-09-22 Thread Luke Chen
Hi, Everyone,

Justine Olshan has been a Kafka committer since Dec. 2022. She has been
very active and instrumental to the community since becoming a committer.
It's my pleasure to announce that Justine is now a member of Kafka PMC.

Congratulations Justine!

Luke
on behalf of Apache Kafka PMC


Re: [ANNOUNCE] New Kafka PMC Member: Justine Olshan

2023-09-22 Thread Philip Nee
Congrats Justine!

On Fri, Sep 22, 2023 at 7:07 PM Luke Chen  wrote:

> Hi, Everyone,
>
> Justine Olshan has been a Kafka committer since Dec. 2022. She has been
> very active and instrumental to the community since becoming a committer.
> It's my pleasure to announce that Justine is now a member of Kafka PMC.
>
> Congratulations Justine!
>
> Luke
> on behalf of Apache Kafka PMC
>


Re: [ANNOUNCE] New Kafka PMC Member: Justine Olshan

2023-09-22 Thread Tzu-Li (Gordon) Tai
Congratulations Justine!

On Fri, Sep 22, 2023, 19:25 Philip Nee  wrote:

> Congrats Justine!
>
> On Fri, Sep 22, 2023 at 7:07 PM Luke Chen  wrote:
>
> > Hi, Everyone,
> >
> > Justine Olshan has been a Kafka committer since Dec. 2022. She has been
> > very active and instrumental to the community since becoming a committer.
> > It's my pleasure to announce that Justine is now a member of Kafka PMC.
> >
> > Congratulations Justine!
> >
> > Luke
> > on behalf of Apache Kafka PMC
> >
>


Re: [ANNOUNCE] New Kafka PMC Member: Justine Olshan

2023-09-22 Thread Guozhang Wang
Congratulations!

On Fri, Sep 22, 2023 at 8:44 PM Tzu-Li (Gordon) Tai  wrote:
>
> Congratulations Justine!
>
> On Fri, Sep 22, 2023, 19:25 Philip Nee  wrote:
>
> > Congrats Justine!
> >
> > On Fri, Sep 22, 2023 at 7:07 PM Luke Chen  wrote:
> >
> > > Hi, Everyone,
> > >
> > > Justine Olshan has been a Kafka committer since Dec. 2022. She has been
> > > very active and instrumental to the community since becoming a committer.
> > > It's my pleasure to announce that Justine is now a member of Kafka PMC.
> > >
> > > Congratulations Justine!
> > >
> > > Luke
> > > on behalf of Apache Kafka PMC
> > >
> >


Re: [ANNOUNCE] New Kafka PMC Member: Justine Olshan

2023-09-22 Thread Chris Egerton
Congrats Justine!
On Fri, Sep 22, 2023, 20:47 Guozhang Wang 
wrote:

> Congratulations!
>
> On Fri, Sep 22, 2023 at 8:44 PM Tzu-Li (Gordon) Tai 
> wrote:
> >
> > Congratulations Justine!
> >
> > On Fri, Sep 22, 2023, 19:25 Philip Nee  wrote:
> >
> > > Congrats Justine!
> > >
> > > On Fri, Sep 22, 2023 at 7:07 PM Luke Chen  wrote:
> > >
> > > > Hi, Everyone,
> > > >
> > > > Justine Olshan has been a Kafka committer since Dec. 2022. She has
> been
> > > > very active and instrumental to the community since becoming a
> committer.
> > > > It's my pleasure to announce that Justine is now a member of Kafka
> PMC.
> > > >
> > > > Congratulations Justine!
> > > >
> > > > Luke
> > > > on behalf of Apache Kafka PMC
> > > >
> > >
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2224

2023-09-22 Thread Apache Jenkins Server
See