Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1972

2023-07-05 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-15150) Add ServiceLoaderScanner implementation

2023-07-05 Thread Greg Harris (Jira)
Greg Harris created KAFKA-15150:
---

 Summary: Add ServiceLoaderScanner implementation
 Key: KAFKA-15150
 URL: https://issues.apache.org/jira/browse/KAFKA-15150
 Project: Kafka
  Issue Type: Task
  Components: KafkaConnect
Reporter: Greg Harris
Assignee: Greg Harris






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15149) Fix not sending RPCs in dual-write mode when there are new partitions

2023-07-05 Thread Andrew Grant (Jira)
Andrew Grant created KAFKA-15149:


 Summary: Fix not sending RPCs in dual-write mode when there are 
new partitions
 Key: KAFKA-15149
 URL: https://issues.apache.org/jira/browse/KAFKA-15149
 Project: Kafka
  Issue Type: Bug
Reporter: Andrew Grant








Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #1971

2023-07-05 Thread Apache Jenkins Server
See 




Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.5 #29

2023-07-05 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-15148) Some integration tests are running as unit tests

2023-07-05 Thread Divij Vaidya (Jira)
Divij Vaidya created KAFKA-15148:


 Summary: Some integration tests are running as unit tests
 Key: KAFKA-15148
 URL: https://issues.apache.org/jira/browse/KAFKA-15148
 Project: Kafka
  Issue Type: Test
Reporter: Divij Vaidya


*This is a good item for a newcomer to the Kafka code base to pick up*

 

When we run `./gradlew unitTest`, it is supposed to run all unit tests. 
However, we are also running some integration tests as part of it, which makes 
the overall process of running unitTest take longer than expected.

Example of such tests:


> :streams:unitTest > Executing test 
> org.apache...integration.NamedTopologyIntegrationTest

> :streams:unitTest > Executing test 
> org.apache...integration.QueryableStateIntegrationTest


After this task, we should not run these tests as part of `./gradlew 
unitTest`; instead, they should be run as part of `./gradlew integrationTest`.
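As a rough illustration of the convention at play (purely a sketch; the class name below and the actual Gradle wiring are assumptions, not Kafka's real build logic), the integration tests in the logs follow the `*IntegrationTest` naming pattern, which a test-task filter could key off:

```java
import java.util.List;
import java.util.stream.Collectors;

public class TestSplitSketch {
    // Sketch only: partition test class names the way a Gradle test filter
    // could, using the *IntegrationTest suffix convention seen in the logs.
    public static List<String> unitOnly(List<String> testClasses) {
        return testClasses.stream()
                .filter(c -> !c.endsWith("IntegrationTest"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> all = List.of(
                "org.apache.kafka.streams.integration.NamedTopologyIntegrationTest",
                "org.apache.kafka.streams.state.SomeStoreTest");
        // Only the non-integration class survives the filter
        System.out.println(unitOnly(all));
    }
}
```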

As part of the acceptance criteria, please add a snapshot of the generated 
HTML summary to verify that these tests are indeed running as part of 
integrationTest.





[jira] [Resolved] (KAFKA-15114) StorageTool help specifies user as parameter not name

2023-07-05 Thread Divij Vaidya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Divij Vaidya resolved KAFKA-15114.
--
  Reviewer: Colin McCabe
Resolution: Fixed

> StorageTool help specifies user as parameter not name
> -
>
> Key: KAFKA-15114
> URL: https://issues.apache.org/jira/browse/KAFKA-15114
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Affects Versions: 3.5.0
>Reporter: Proven Provenzano
>Assignee: Proven Provenzano
>Priority: Minor
> Fix For: 3.6.0, 3.5.1
>
>
> StorageTool's help message currently specifies setting a {{user}} parameter 
> when creating a SCRAM record for bootstrap.
> However, StorageTool parses and accepts the parameter only as {{name}}, so 
> the help message is wrong.
> The choice of {{name}} over {{user}} as the parameter is because the record 
> internally uses name, all tests using the StorageTool pass name as a 
> parameter, KafkaPrincipals are created with {{name}}, and creating SCRAM 
> credentials is done with {{--entity-name}}.
> I will change the help to specify {{name}} instead of {{user}}.
>  





Re: [DISCUSS] KIP-940: Broker extension point for validating record contents at produce time

2023-07-05 Thread Edoardo Comar
Hi Jorge!

On Fri, 30 Jun 2023 at 15:47, Jorge Esteban Quilcate Otoya
 wrote:
>
> Thank you both for the replies! A couple more comments:
>
> The current proposal is to have ‘record.validation.policy’ per topic
> (default null). A flag such as ‘record.validation.policy.enable’
> (default=false) might be simpler to configure from the user perspective.
>
> Also, at the moment, it is a bit unclear to me what value the topic config
> ‘record.validation.policy’ should contain: is it the policy class name? How
> is the policy expected to use the name it receives?
>

The 'record.validation.policy' will typically contain a value that is
meaningful to the policy implementation.
For example, a schema registry might support different strategies to
associate a schema with a topic.
The policy could use this property to determine which strategy is in
use and then evaluate whether the record is valid.
We decided to reserve the 'null' value to mean "disable validation for
this topic", to avoid introducing a second, inter-dependent
boolean property.
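A toy sketch of how a policy implementation might interpret the property (the strategy names and method below are hypothetical, not from the KIP):

```java
import java.util.Map;
import java.util.function.UnaryOperator;

public class ValidationPolicySketch {
    // Hypothetical subject-naming strategies a schema-registry-backed policy
    // might support; the record.validation.policy value selects one by name.
    private static final Map<String, UnaryOperator<String>> STRATEGIES = Map.of(
            "topic-name", topic -> topic + "-value",
            "topic-key-name", topic -> topic + "-key");

    // A null policy value means validation is disabled for the topic,
    // matching the KIP discussion's use of null.
    public static String subjectFor(String policyValue, String topic) {
        if (policyValue == null) return null; // validation disabled
        UnaryOperator<String> strategy = STRATEGIES.get(policyValue);
        if (strategy == null)
            throw new IllegalArgumentException("unknown policy: " + policyValue);
        return strategy.apply(topic);
    }

    public static void main(String[] args) {
        System.out.println(subjectFor("topic-name", "orders")); // orders-value
    }
}
```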

>
> Thanks! I think adding a simple example of a Policy implementation and how
> plugin developer may use this hints (and metadata as well) may bring some
> clarity to the proposal.
>

We've added a sample to the KIP, hope this helps.

We expect RecordIntrospectionHints to be a declaration made by the policy,
which the implementation of the KIP may use to optimise record
iteration, avoiding a full decompression when a message is
received with a compression type matching the topic's compression config.
Currently Kafka optimises that case by supplying an iterator that does
not provide access to the record data and only answers hasKey/hasValue
checks.
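As a toy illustration of that optimisation (the interface and names here are invented for illustration, not Kafka's actual API), such an iterator could expose a view like:

```java
public class ShallowRecordViewSketch {
    // Illustrative only: a record view that can answer hasKey/hasValue
    // checks without exposing (and thus without decompressing) the data.
    interface ShallowRecordView {
        boolean hasKey();
        boolean hasValue();
        // deliberately no key()/value() accessors - the data stays compressed
    }

    public static void main(String[] args) {
        ShallowRecordView view = new ShallowRecordView() {
            public boolean hasKey() { return true; }
            public boolean hasValue() { return false; }
        };
        // A policy needing only presence checks never forces decompression
        System.out.println(view.hasKey() + " " + view.hasValue());
    }
}
```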

HTH,
best
Edo & Adrian


Re: [DISCUSS] KIP-910: Update Source offsets for Source Connectors without producing records

2023-07-05 Thread Chris Egerton
Hi Sagar,

Thanks for updating the KIP! The latest draft seems simpler and more
focused, which I think is a win for users and developers alike. Here are my
thoughts on the current draft:

1. (Nit) Can we move the "Public Interfaces" section before the "Proposed
Changes" section? It's nice to have a summary of the user/developer-facing
changes first since that answers many of the questions that I had while
reading the "Proposed Changes" section. I'd bet that this is also why we
use that ordering in the KIP template.

2. Why are we invoking SourceTask::updateOffsets so frequently when
exactly-once support is disabled? Wouldn't it be simpler both for our
implementation and for connector developers if we only invoked it directly
before committing offsets, instead of potentially several times between
offset commits, especially since that would also mirror the behavior with
exactly-once support enabled?

3. Building off of point 2, we wouldn't need to specify any more detail
than that "SourceTask::updateOffsets will be invoked directly before
committing offsets, with the to-be-committed offsets". There would be no
need to distinguish between when exactly-once support is enabled or
disabled.

4. Some general stylistic feedback: we shouldn't mention the names of
internal classes or methods in KIPs. KIPs are for discussing high-level
design proposals. Internal names and APIs may change over time, and are not
very helpful to readers who are not already familiar with the code base.
Instead, we should describe changes in behavior, not code.

5. Why return a complete map of to-be-committed offsets instead of a map of
just the offsets that the connector wants to change? This seems especially
intuitive since we automatically re-insert source partitions that have been
removed by the connector.

6. I don't think we need to return an Optional from
SourceTask::updateOffsets. Developers can return null instead of
Optional.empty(), and since the framework will have to handle null return
values either way, this would reduce the number of cases for us to handle
from three (Optional.of(...), Optional.empty(), null) to two (null,
non-null).
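To make points 5 and 6 concrete, here's a sketch of framework-side handling where a null return means "no changes" and the returned map holds only deltas. This is purely illustrative; the names and the flat String-to-Long shape are simplifications of mine, not Connect's real partition/offset map API:

```java
import java.util.HashMap;
import java.util.Map;

public class OffsetUpdateSketch {
    // Illustrative only: merge a task-supplied delta map over the offsets the
    // framework is about to commit. A null return from updateOffsets means
    // "no changes", so there are only two cases to handle (null / non-null).
    public static Map<String, Long> applyUpdates(Map<String, Long> toCommit,
                                                 Map<String, Long> updates) {
        if (updates == null) return toCommit;   // task made no changes
        Map<String, Long> merged = new HashMap<>(toCommit);
        merged.putAll(updates);                 // deltas win over existing entries
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Long> merged = applyUpdates(
                new HashMap<>(Map.of("partition-1", 42L)),
                Map.of("partition-2", 7L));
        System.out.println(merged.size()); // 2
    }
}
```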

7. Why disallow tombstone records? If an upstream resource disappears, then
wouldn't a task want to emit a tombstone record without having to also emit
an accompanying source record? This could help prevent an
infinitely-growing offsets topic, although with KIP-875 coming out in the
next release, perhaps we can leave this out for now and let Connect users
and cluster administrators do this work manually instead of letting
connector developers automate it.

8. Is the information on multiple offsets topics for exactly-once
connectors relevant to this KIP? If not, we should remove it.

9. It seems like most of the use cases that motivate this KIP only require
being able to add a new source partition/source offset pair to the
to-be-committed offsets. Do we need to allow connector developers to modify
source offsets for already-present source partitions at all? If we reduce
the surface of the API, then the worst case is still just that the offsets
we commit are at most one commit out-of-date.

10. (Nit) The "Motivation" section states that "offsets are written
periodically by the connect framework to an offsets topic". This is only
true in distributed mode; in standalone mode, we write offsets to a local
file.

Cheers,

Chris

On Tue, Jul 4, 2023 at 8:42 AM Yash Mayya  wrote:

> Hi Sagar,
>
> Thanks for your continued work on this KIP! Here are my thoughts on your
> updated proposal:
>
> 1) In the proposed changes section where you talk about modifying the
> offsets, could you please clarify that tasks shouldn't modify the offsets
> map that is passed as an argument? Currently, the distinction between the
> offsets map passed as an argument and the offsets map that is returned is
> not very clear in numerous places.
>
> 2) The default return value of Optional.empty() seems to be fairly
> non-intuitive considering that the return value is supposed to be the
> offsets that are to be committed. Can we consider simply returning the
> offsets argument itself by default instead?
>
> 3) The KIP states that "It is also possible that a task might choose to
> send a tombstone record as an offset. This is not recommended and to
> prevent connectors shooting themselves in the foot due to this" - could you
> please clarify why this is not recommended / supported?
>
> 4) The KIP states that "If a task returns an Optional of a null object or
> an Optional of an empty map, even for such cases the behaviour would
> be disabled." - since this is an optional API that source task
> implementations don't necessarily need to implement, I don't think I fully
> follow why the return type of the proposed "updateOffsets" method is an
> Optional? Can we not simply use the Map as the return type instead?
>
> 5) The KIP states that "The offsets passed to the updateOffsets  method
> would be the offset from the 

[jira] [Created] (KAFKA-15147) Measure pending and outstanding Remote Segment operations

2023-07-05 Thread Jorge Esteban Quilcate Otoya (Jira)
Jorge Esteban Quilcate Otoya created KAFKA-15147:


 Summary: Measure pending and outstanding Remote Segment operations
 Key: KAFKA-15147
 URL: https://issues.apache.org/jira/browse/KAFKA-15147
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Jorge Esteban Quilcate Otoya


Remote Log Segment operations (copy/delete) are executed by the Remote Storage 
Manager, and recorded by Remote Log Metadata Manager (e.g. default 
TopicBasedRLMM writes to the internal Kafka topic state changes on remote log 
segments).

As executions run, fail, and retry, it will be important to know how many 
operations are pending and outstanding over time in order to alert operators.

Pending-operation counts alone are not enough to alert on, as values can 
oscillate close to zero. An additional condition needs to apply (running time 
> threshold) before an operation is considered outstanding.

Proposal:

RemoteLogManager could be extended with 2 concurrent maps 
(pendingSegmentCopies, pendingSegmentDeletes) of type `Map[Uuid, Long]`, 
recording per segmentId when the operation started, and based on these expose 
2 metrics per operation:
 * pendingSegmentCopies: gauge over the pendingSegmentCopies map
 * outstandingSegmentCopies: loop over pending ops, and if now - startedTime > 
timeout, then outstanding++ (maybe at debug level?)
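A rough sketch of the two proposed metrics (the method names are invented here for illustration; the real change would live in RemoteLogManager):

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class RemoteSegmentOpMetricsSketch {
    // Sketch of the proposal: remember when each segment copy started, then
    // derive a "pending" gauge and an "outstanding" count (running > timeout).
    private final ConcurrentMap<UUID, Long> pendingSegmentCopies = new ConcurrentHashMap<>();

    public void copyStarted(UUID segmentId, long nowMs) {
        pendingSegmentCopies.put(segmentId, nowMs);
    }

    public void copyFinished(UUID segmentId) {
        pendingSegmentCopies.remove(segmentId);
    }

    // pendingSegmentCopies metric: a gauge over the map's size
    public int pendingCount() {
        return pendingSegmentCopies.size();
    }

    // outstandingSegmentCopies metric: loop over pending ops and count
    // those whose running time exceeds the timeout
    public long outstandingCount(long nowMs, long timeoutMs) {
        return pendingSegmentCopies.values().stream()
                .filter(startedMs -> nowMs - startedMs > timeoutMs)
                .count();
    }
}
```

The same pair of maps and methods would be duplicated for deletes (pendingSegmentDeletes).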

Is this a valuable metric to add to Tiered Storage, or is it better solved in 
a custom RLMM implementation?

Also, does it require a KIP?

Thanks!





Re: [DISCUSS] KIP-933 Publish metrics when source connector fails to poll data

2023-07-05 Thread Sagar
Hi Ravindra,

One minor thing: the discussion thread URL that you provided points to
an incorrect page. Could you please update it to this (
https://www.mail-archive.com/dev@kafka.apache.org/msg131894.html)?

Thanks!
Sagar.

On Sun, Jul 2, 2023 at 12:06 AM Ravindra Nath Kakarla <
ravindhran...@gmail.com> wrote:

> Thanks for reviewing and providing the feedback.
>
> > 1) Does it make sense to drop the *record *part from the metric name as
> it
> doesn't seem to serve much purpose? I would rather call the metric as
> *source-poll-errors-total
>
> Yes, "records" is not needed and misleading.
>
> > Staying on names, I am thinking, does it make more sense to have
> *failures* in the name instead of *errors *i.e.*
> source-poll-failures-total* and
> *source-poll-failures-rate*? What do you think?
>
> Agree, "failures" is a more appropriate term here.
>
> > Regarding the inclusion of retriable exceptions, as of today, source
> tasks don't retry even in cases of RetriableException. A PR was created to
> modify this behaviour (https://github.com/apache/kafka/pull/13726) but the
> reason I bring it up is that in that PR, the failures etc for retry context
> would be computed from the RetryWithToleranceOperator. I am not sure when
> would that get merged, but does it change the failure counting logic in any
> ways?
>
> In my opinion, we should ignore retryable exceptions when SourceTasks
> switches to using RetryWithToleranceOperator. I can update the KIP to call
> this out. If the PR for this KIP is implemented first, we can include both
> retriable and non-retriable exceptions. I can also add a comment on
> https://github.com/apache/kafka/pull/13726 to remove them. What do you
> think?
>
> Thank you
>
>
> On Wed, Jun 28, 2023 at 1:09 PM Sagar  wrote:
>
> > Hey Ravindra,
> >
> > Thanks for the KIP! It appears to be a useful addition to the metrics to
> > understand poll related failures which can go untracked as of now. I just
> > have a couple of minor comments:
> >
> > 1) Does it make sense to drop the *record *part from the metric name as
> it
> > doesn't seem to serve much purpose? I would rather call the metric as
> > *source-poll-errors-total
> > *and *source-poll-errors-rate*.
> > 2) Staying on names, I am thinking, does it make more sense to have
> > *failures* in the name instead of *errors *i.e.*
> > source-poll-failures-total* and
> > *source-poll-failures-rate*? What do you think?
> > 3) Regarding the inclusion of retriable exceptions, as of today, source
> > tasks don't retry even in cases of RetriableException. A PR was created
> to
> > modify this behaviour (https://github.com/apache/kafka/pull/13726) but
> the
> > reason I bring it up is that in that PR, the failures etc for retry
> context
> > would be computed from the RetryWithToleranceOperator. I am not sure when
> > would that get merged, but does it change the failure counting logic in
> any
> > ways?
> >
> > Thanks!
> > Sagar.
> >
> >
> > On Sun, Jun 25, 2023 at 12:40 AM Ravindra Nath Kakarla <
> > ravindhran...@gmail.com> wrote:
> >
> > > Hello,
> > >
> > > I would like to start a discussion on KIP-933 to add new metrics to
> Kafka
> > > Connect that help monitor polling failures with source connectors.
> > >
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-933%3A+Publish+metrics+when+source+connector+fails+to+poll+data
> > >
> > > Looking forward to feedback on this.
> > >
> > > Thank you,
> > > Ravindranath
> > >
> >
>


Re: [DISCUSS] KIP-933 Publish metrics when source connector fails to poll data

2023-07-05 Thread Sagar
Hi Ravindra,

When you say

we should ignore retryable exceptions when SourceTasks switches to using
> RetryWithToleranceOperator.


do you mean the metrics computation should be avoided?

Now that I think about it, it might be better to keep the PR and the KIP
separate from each other. For now, because there are no retries via
RetryWithToleranceOperator, if poll() fails, we can just count it as a poll
failure for both retriable and non-retriable exceptions (as you pointed out).
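In code terms, the behaviour being agreed on might look like this sketch (metric plumbing reduced to a plain counter; none of these names come from the KIP or Connect's metrics API):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicLong;

public class PollFailureMetricsSketch {
    // Sketch: since source tasks currently do not retry on RetriableException,
    // every poll() failure - retriable or not - bumps the same counter.
    private final AtomicLong pollFailuresTotal = new AtomicLong();

    public <T> T pollOnce(Callable<T> poll) throws Exception {
        try {
            return poll.call();
        } catch (Exception e) {
            pollFailuresTotal.incrementAndGet(); // count the failure, then propagate
            throw e;
        }
    }

    public long pollFailuresTotal() {
        return pollFailuresTotal.get();
    }
}
```

If tasks later switch to retrying via RetryWithToleranceOperator, the counting would move into (or be derived from) that retry context instead.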

Let me know what you think.

Thanks!
Sagar.


On Sun, Jul 2, 2023 at 12:06 AM Ravindra Nath Kakarla <
ravindhran...@gmail.com> wrote:

> Thanks for reviewing and providing the feedback.
>
> > 1) Does it make sense to drop the *record *part from the metric name as
> it
> doesn't seem to serve much purpose? I would rather call the metric as
> *source-poll-errors-total
>
> Yes, "records" is not needed and misleading.
>
> > Staying on names, I am thinking, does it make more sense to have
> *failures* in the name instead of *errors *i.e.*
> source-poll-failures-total* and
> *source-poll-failures-rate*? What do you think?
>
> Agree, "failures" is a more appropriate term here.
>
> > Regarding the inclusion of retriable exceptions, as of today, source
> tasks don't retry even in cases of RetriableException. A PR was created to
> modify this behaviour (https://github.com/apache/kafka/pull/13726) but the
> reason I bring it up is that in that PR, the failures etc for retry context
> would be computed from the RetryWithToleranceOperator. I am not sure when
> would that get merged, but does it change the failure counting logic in any
> ways?
>
> In my opinion, we should ignore retryable exceptions when SourceTasks
> switches to using RetryWithToleranceOperator. I can update the KIP to call
> this out. If the PR for this KIP is implemented first, we can include both
> retriable and non-retriable exceptions. I can also add a comment on
> https://github.com/apache/kafka/pull/13726 to remove them. What do you
> think?
>
> Thank you
>
>
> On Wed, Jun 28, 2023 at 1:09 PM Sagar  wrote:
>
> > Hey Ravindra,
> >
> > Thanks for the KIP! It appears to be a useful addition to the metrics to
> > understand poll related failures which can go untracked as of now. I just
> > have a couple of minor comments:
> >
> > 1) Does it make sense to drop the *record *part from the metric name as
> it
> > doesn't seem to serve much purpose? I would rather call the metric as
> > *source-poll-errors-total
> > *and *source-poll-errors-rate*.
> > 2) Staying on names, I am thinking, does it make more sense to have
> > *failures* in the name instead of *errors *i.e.*
> > source-poll-failures-total* and
> > *source-poll-failures-rate*? What do you think?
> > 3) Regarding the inclusion of retriable exceptions, as of today, source
> > tasks don't retry even in cases of RetriableException. A PR was created
> to
> > modify this behaviour (https://github.com/apache/kafka/pull/13726) but
> the
> > reason I bring it up is that in that PR, the failures etc for retry
> context
> > would be computed from the RetryWithToleranceOperator. I am not sure when
> > would that get merged, but does it change the failure counting logic in
> any
> > ways?
> >
> > Thanks!
> > Sagar.
> >
> >
> > On Sun, Jun 25, 2023 at 12:40 AM Ravindra Nath Kakarla <
> > ravindhran...@gmail.com> wrote:
> >
> > > Hello,
> > >
> > > I would like to start a discussion on KIP-933 to add new metrics to
> Kafka
> > > Connect that help monitor polling failures with source connectors.
> > >
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-933%3A+Publish+metrics+when+source+connector+fails+to+poll+data
> > >
> > > Looking forward to feedback on this.
> > >
> > > Thank you,
> > > Ravindranath
> > >
> >
>


[GitHub] [kafka-site] fvaleri commented on a diff in pull request #531: Add CVE-2023-34455 to cve-list

2023-07-05 Thread via GitHub


fvaleri commented on code in PR #531:
URL: https://github.com/apache/kafka-site/pull/531#discussion_r1252964157


##
cve-list.html:
##
@@ -9,6 +9,44 @@ Apache Kafka Security Vulnerabilities
 
 This page lists all security vulnerabilities fixed in released versions of 
Apache Kafka.
 
+  https://nvd.nist.gov/vuln/detail/CVE-2023-34455;>CVE-2023-34455 
Clients using Snappy compression may cause out of memory error on brokers
+
+   This CVE identifies a vulnerability in snappy-java which could be 
used to cause an Out-of-Memory (OOM) condition, leading to 
Denial-of-Service(DoS) on the Kafka broker.
+  The vulnerability allows any user who can producer data to the 
broker to exploit the vulnerability by sending a malicious payload in the 
record which is compressed using snappy. For more details on the vulnerability, 
please refer to the following
+  link: https://github.com/xerial/snappy-java/security/advisories/GHSA-qcwq-55hx-v3vh;>snappy-java
 GitHub advisory.
+  
+
+  
+
+
+  Versions affected
+  0.8.0 - 3.5.0
+
+
+  Fixed versions
+  3.5.1 (in-progress, https://lists.apache.org/thread/fkqy14bx8dc2ffrtvxyrg5f9fobjd2fd;>tentative
 release end of July 2023)
+
+
+  Impact
+   This vulnerability allows any user who can produce data to the 
broker to exploit the vulnerability, potentially causing an Out-of-Memory (OOM) 
condition, leading to Denial-of-Service(DoS) on the Kafka broker. It could be 
exploited

Review Comment:
   Thanks.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka-site] divijvaidya merged pull request #531: Add CVE-2023-34455 to cve-list

2023-07-05 Thread via GitHub


divijvaidya merged PR #531:
URL: https://github.com/apache/kafka-site/pull/531





[GitHub] [kafka-site] mimaison commented on a diff in pull request #531: Add CVE-2023-34455 to cve-list

2023-07-05 Thread via GitHub


mimaison commented on code in PR #531:
URL: https://github.com/apache/kafka-site/pull/531#discussion_r1252930154


##
cve-list.html:
##
@@ -9,6 +9,44 @@ Apache Kafka Security Vulnerabilities
 
 This page lists all security vulnerabilities fixed in released versions of 
Apache Kafka.
 
+  https://nvd.nist.gov/vuln/detail/CVE-2023-34455;>CVE-2023-34455 
Clients using Snappy compression may cause out of memory error on brokers
+
+   This CVE identifies a vulnerability in snappy-java which could be 
used to cause an Out-of-Memory (OOM) condition, leading to 
Denial-of-Service(DoS) on the Kafka broker.
+  The vulnerability allows any user who can producer data to the 
broker to exploit the vulnerability by sending a malicious payload in the 
record which is compressed using snappy. For more details on the vulnerability, 
please refer to the following
+  link: https://github.com/xerial/snappy-java/security/advisories/GHSA-qcwq-55hx-v3vh;>snappy-java
 GitHub advisory.
+  
+
+  
+
+
+  Versions affected
+  0.8.0 - 3.5.0
+
+
+  Fixed versions
+  3.5.1 (in-progress, https://lists.apache.org/thread/fkqy14bx8dc2ffrtvxyrg5f9fobjd2fd;>tentative
 release end of July 2023)
+
+
+  Impact
+   This vulnerability allows any user who can produce data to the 
broker to exploit the vulnerability, potentially causing an Out-of-Memory (OOM) 
condition, leading to Denial-of-Service(DoS) on the Kafka broker. It could be 
exploited
+by sending a malicious payload in the record which is compressed 
using snappy. On receiving the record, the broker will try to de-compress the 
record to perform record validation and
+it will https://github.com/apache/kafka/blob/c97b88d5db4de28d9f51bb11fb71ddd6217c7dda/clients/src/main/java/org/apache/kafka/common/compress/SnappyFactory.java#L44;>delegate
 decompression to snappy-java library.
+The vulnerability in the snappy-java library may cause allocation 
of an unexpected amount of heap memory, causing an OOM on the broker. Any 
configured quota will not be able to prevent this because a single record can 
exploit this vulnerability.
+  
+
+
+  Advice
+   We advise all Kafka users to promptly upgrade to the latest 
version of snappy-java (1.1.10.1) to mitigate this vulnerability.

Review Comment:
   Thanks!






[GitHub] [kafka-site] divijvaidya commented on a diff in pull request #531: Add CVE-2023-34455 to cve-list

2023-07-05 Thread via GitHub


divijvaidya commented on code in PR #531:
URL: https://github.com/apache/kafka-site/pull/531#discussion_r1252929336


##
cve-list.html:
##
@@ -9,6 +9,44 @@ Apache Kafka Security Vulnerabilities
 
 This page lists all security vulnerabilities fixed in released versions of 
Apache Kafka.
 
+  https://nvd.nist.gov/vuln/detail/CVE-2023-34455;>CVE-2023-34455 
Clients using Snappy compression may cause out of memory error on brokers
+
+   This CVE identifies a vulnerability in snappy-java which could be 
used to cause an Out-of-Memory (OOM) condition, leading to 
Denial-of-Service(DoS) on the Kafka broker.
+  The vulnerability allows any user who can producer data to the 
broker to exploit the vulnerability by sending a malicious payload in the 
record which is compressed using snappy. For more details on the vulnerability, 
please refer to the following
+  link: https://github.com/xerial/snappy-java/security/advisories/GHSA-qcwq-55hx-v3vh;>snappy-java
 GitHub advisory.
+  
+
+  
+
+
+  Versions affected
+  0.8.0 - 3.5.0
+
+
+  Fixed versions
+  3.5.1 (in-progress, https://lists.apache.org/thread/fkqy14bx8dc2ffrtvxyrg5f9fobjd2fd;>tentative
 release end of July 2023)
+
+
+  Impact
+   This vulnerability allows any user who can produce data to the 
broker to exploit the vulnerability, potentially causing an Out-of-Memory (OOM) 
condition, leading to Denial-of-Service(DoS) on the Kafka broker. It could be 
exploited
+by sending a malicious payload in the record which is compressed 
using snappy. On receiving the record, the broker will try to de-compress the 
record to perform record validation and
+it will https://github.com/apache/kafka/blob/c97b88d5db4de28d9f51bb11fb71ddd6217c7dda/clients/src/main/java/org/apache/kafka/common/compress/SnappyFactory.java#L44;>delegate
 decompression to snappy-java library.
+The vulnerability in the snappy-java library may cause allocation 
of an unexpected amount of heap memory, causing an OOM on the broker. Any 
configured quota will not be able to prevent this because a single record can 
exploit this vulnerability.
+  
+
+
+  Advice
+   We advise all Kafka users to promptly upgrade to the latest 
version of snappy-java (1.1.10.1) to mitigate this vulnerability.

Review Comment:
   good idea. Fixed in latest commit and rephrased to:
   ```
   to promptly upgrade to a version of snappy-java (>=1.1.10.1) to
   ```
   and 
   ```
   The latest version (1.1.10.1, as of July 5, 2023) of snappy-java is backward 
compatible
   ```






[GitHub] [kafka-site] divijvaidya commented on a diff in pull request #531: Add CVE-2023-34455 to cve-list

2023-07-05 Thread via GitHub


divijvaidya commented on code in PR #531:
URL: https://github.com/apache/kafka-site/pull/531#discussion_r1252919917


##
cve-list.html:
##
@@ -9,6 +9,44 @@ Apache Kafka Security Vulnerabilities
 
 This page lists all security vulnerabilities fixed in released versions of 
Apache Kafka.
 
+  https://nvd.nist.gov/vuln/detail/CVE-2023-34455;>CVE-2023-34455 
Clients using Snappy compression may cause out of memory error on brokers
+
+   This CVE identifies a vulnerability in snappy-java which could be 
used to cause an Out-of-Memory (OOM) condition, leading to 
Denial-of-Service(DoS) on the Kafka broker.
+  The vulnerability allows any user who can producer data to the 
broker to exploit the vulnerability by sending a malicious payload in the 
record which is compressed using snappy. For more details on the vulnerability, 
please refer to the following
+  link: https://github.com/xerial/snappy-java/security/advisories/GHSA-qcwq-55hx-v3vh;>snappy-java
 GitHub advisory.
+  
+
+  
+
+
+  Versions affected
+  0.8.0 - 3.5.0
+
+
+  Fixed versions
+  3.5.1 (in-progress, https://lists.apache.org/thread/fkqy14bx8dc2ffrtvxyrg5f9fobjd2fd;>tentative
 release end of July 2023)
+
+
+  Impact
+   This vulnerability allows any user who can produce data to the 
broker to exploit the vulnerability, potentially causing an Out-of-Memory (OOM) 
condition, leading to Denial-of-Service(DoS) on the Kafka broker. It could be 
exploited

Review Comment:
   Fixed in latest commit



##
cve-list.html:
##
@@ -9,6 +9,44 @@ Apache Kafka Security Vulnerabilities
 
 This page lists all security vulnerabilities fixed in released versions of 
Apache Kafka.
 
+  https://nvd.nist.gov/vuln/detail/CVE-2023-34455;>CVE-2023-34455 
Clients using Snappy compression may cause out of memory error on brokers
+
+   This CVE identifies a vulnerability in snappy-java which could be 
used to cause an Out-of-Memory (OOM) condition, leading to 
Denial-of-Service(DoS) on the Kafka broker.
+  The vulnerability allows any user who can producer data to the 
broker to exploit the vulnerability by sending a malicious payload in the 
record which is compressed using snappy. For more details on the vulnerability, 
please refer to the following
+  link: https://github.com/xerial/snappy-java/security/advisories/GHSA-qcwq-55hx-v3vh;>snappy-java
 GitHub advisory.
+  
+
+  
+
+
+  Versions affected
+  0.8.0 - 3.5.0
+
+
+  Fixed versions
+  3.5.1 (in-progress, https://lists.apache.org/thread/fkqy14bx8dc2ffrtvxyrg5f9fobjd2fd;>tentative
 release end of July 2023)
+
+
+  Impact
+   This vulnerability allows any user who can produce data to the 
broker to exploit the vulnerability, potentially causing an Out-of-Memory (OOM) 
condition, leading to Denial-of-Service(DoS) on the Kafka broker. It could be 
exploited
+by sending a malicious payload in the record which is compressed 
using snappy. On receiving the record, the broker will try to de-compress the 
record to perform record validation and
+it will https://github.com/apache/kafka/blob/c97b88d5db4de28d9f51bb11fb71ddd6217c7dda/clients/src/main/java/org/apache/kafka/common/compress/SnappyFactory.java#L44;>delegate
 decompression to snappy-java library.
+The vulnerability in the snappy-java library may cause allocation 
of an unexpected amount of heap memory, causing an OOM on the broker. Any 
configured quota will not be able to prevent this because a single record can 
exploit this vulnerability.
+  
+
+
+  Advice
+   We advise all Kafka users to promptly upgrade to the latest 
version of snappy-java (1.1.10.1) to mitigate this vulnerability.

Review Comment:
   Fixed in latest commit






[GitHub] [kafka-site] mimaison commented on a diff in pull request #531: Add CVE-2023-34455 to cve-list

2023-07-05 Thread via GitHub


mimaison commented on code in PR #531:
URL: https://github.com/apache/kafka-site/pull/531#discussion_r1252917298


##
cve-list.html:
##
@@ -9,6 +9,44 @@ Apache Kafka Security Vulnerabilities
 
 This page lists all security vulnerabilities fixed in released versions of 
Apache Kafka.
 
+  <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-34455">CVE-2023-34455</a> Clients using Snappy compression may cause out of memory error on brokers
+
+   This CVE identifies a vulnerability in snappy-java which could be 
used to cause an Out-of-Memory (OOM) condition, leading to 
Denial-of-Service (DoS) on the Kafka broker.
+  The vulnerability allows any user who can produce data to the 
broker to exploit it by sending a malicious payload in a 
record which is compressed using snappy. For more details on the vulnerability, 
please refer to the following
+  link: <a href="https://github.com/xerial/snappy-java/security/advisories/GHSA-qcwq-55hx-v3vh">snappy-java GitHub advisory</a>.
+  
+
+  
+
+
+  Versions affected
+  0.8.0 - 3.5.0
+
+
+  Fixed versions
+  3.5.1 (in-progress, <a href="https://lists.apache.org/thread/fkqy14bx8dc2ffrtvxyrg5f9fobjd2fd">tentative release end of July 2023</a>)
+
+
+  Impact
+   This vulnerability allows any user who can produce data to the 
broker to exploit it, potentially causing an Out-of-Memory (OOM) 
condition, leading to Denial-of-Service (DoS) on the Kafka broker. It could be 
exploited
+by sending a malicious payload in a record which is compressed 
using snappy. On receiving the record, the broker will try to decompress the 
record to perform record validation, and
+it will <a href="https://github.com/apache/kafka/blob/c97b88d5db4de28d9f51bb11fb71ddd6217c7dda/clients/src/main/java/org/apache/kafka/common/compress/SnappyFactory.java#L44">delegate</a> decompression to the snappy-java library.
+The vulnerability in the snappy-java library may cause allocation 
of an unexpected amount of heap memory, causing an OOM on the broker. Any 
configured quota will not be able to prevent this, because a single record can 
exploit this vulnerability.
+  
+
+
+  Advice
+   We advise all Kafka users to promptly upgrade to the latest 
version of snappy-java (1.1.10.1) to mitigate this vulnerability.

Review Comment:
   Rather than `the latest version`, which will be incorrect very soon, can we 
just say something like `upgrade to snappy-java >= 1.1.10.1`?
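
As a side note for client applications that pick up snappy-java transitively (for example via kafka-clients), one way to act on this advice is to pin the dependency in the build. A hypothetical Gradle fragment follows; the coordinates `org.xerial.snappy:snappy-java` are the library's real ones, but the forcing mechanism shown is only one of several options and is a sketch, not an endorsed procedure:

```groovy
// Sketch: force every configuration to resolve the patched snappy-java.
// Verify the resolved version afterwards, e.g. with `./gradlew dependencies`.
configurations.all {
    resolutionStrategy {
        force 'org.xerial.snappy:snappy-java:1.1.10.1'
    }
}
```

Broker operators, in contrast, would pick up the fix by upgrading Kafka itself once a release ships with the patched library.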






[GitHub] [kafka-site] fvaleri commented on a diff in pull request #531: Add CVE-2023-34455 to cve-list

2023-07-05 Thread via GitHub


fvaleri commented on code in PR #531:
URL: https://github.com/apache/kafka-site/pull/531#discussion_r1252910476


##
cve-list.html:
##
@@ -9,6 +9,44 @@ Apache Kafka Security Vulnerabilities
 
 This page lists all security vulnerabilities fixed in released versions of 
Apache Kafka.
 
+  <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-34455">CVE-2023-34455</a> Clients using Snappy compression may cause out of memory error on brokers
+
+   This CVE identifies a vulnerability in snappy-java which could be 
used to cause an Out-of-Memory (OOM) condition, leading to 
Denial-of-Service (DoS) on the Kafka broker.
+  The vulnerability allows any user who can produce data to the 
broker to exploit it by sending a malicious payload in a 
record which is compressed using snappy. For more details on the vulnerability, 
please refer to the following
+  link: <a href="https://github.com/xerial/snappy-java/security/advisories/GHSA-qcwq-55hx-v3vh">snappy-java GitHub advisory</a>.
+  
+
+  
+
+
+  Versions affected
+  0.8.0 - 3.5.0
+
+
+  Fixed versions
+  3.5.1 (in-progress, <a href="https://lists.apache.org/thread/fkqy14bx8dc2ffrtvxyrg5f9fobjd2fd">tentative release end of July 2023</a>)
+
+
+  Impact
+   This vulnerability allows any user who can produce data to the 
broker to exploit it, potentially causing an Out-of-Memory (OOM) 
condition, leading to Denial-of-Service (DoS) on the Kafka broker. It could be 
exploited
+by sending a malicious payload in a record which is compressed 
using snappy. On receiving the record, the broker will try to decompress the 
record to perform record validation, and
+it will <a href="https://github.com/apache/kafka/blob/c97b88d5db4de28d9f51bb11fb71ddd6217c7dda/clients/src/main/java/org/apache/kafka/common/compress/SnappyFactory.java#L44">delegate</a> decompression to the snappy-java library.
+The vulnerability in the snappy-java library may cause allocation 
of an unexpected amount of heap memory, causing an OOM on the broker. Any 
configured quota will not be able to prevent this, because a single record can 
exploit this vulnerability.
+  
+
+
+  Advice
+   We advise all Kafka users to promptly upgrade to the latest 
version of snappy-java (1.1.10.1) to mitigate this vulnerability.

Review Comment:
   ```suggestion
 We advise all Kafka users to promptly upgrade to the latest 
version of snappy-java (1.1.10.1) to mitigate this vulnerability.
   ```



##
cve-list.html:
##
@@ -9,6 +9,44 @@ Apache Kafka Security Vulnerabilities
 
 This page lists all security vulnerabilities fixed in released versions of 
Apache Kafka.
 
+  <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-34455">CVE-2023-34455</a> Clients using Snappy compression may cause out of memory error on brokers
+
+   This CVE identifies a vulnerability in snappy-java which could be 
used to cause an Out-of-Memory (OOM) condition, leading to 
Denial-of-Service (DoS) on the Kafka broker.
+  The vulnerability allows any user who can produce data to the 
broker to exploit it by sending a malicious payload in a 
record which is compressed using snappy. For more details on the vulnerability, 
please refer to the following
+  link: <a href="https://github.com/xerial/snappy-java/security/advisories/GHSA-qcwq-55hx-v3vh">snappy-java GitHub advisory</a>.
+  
+
+  
+
+
+  Versions affected
+  0.8.0 - 3.5.0
+
+
+  Fixed versions
+  3.5.1 (in-progress, <a href="https://lists.apache.org/thread/fkqy14bx8dc2ffrtvxyrg5f9fobjd2fd">tentative release end of July 2023</a>)
+
+
+  Impact
+   This vulnerability allows any user who can produce data to the 
broker to exploit the vulnerability, potentially causing an Out-of-Memory (OOM) 
condition, leading to Denial-of-Service(DoS) on the Kafka broker. It could be 
exploited

Review Comment:
   Extra space at the start.
   ```suggestion
 This vulnerability allows any user who can produce data to the 
broker to exploit the vulnerability, potentially causing an Out-of-Memory (OOM) 
condition, leading to Denial-of-Service(DoS) on the Kafka broker. It could be 
exploited
   ```






[GitHub] [kafka-site] divijvaidya commented on a diff in pull request #531: Add CVE-2023-34455 to cve-list

2023-07-05 Thread via GitHub


divijvaidya commented on code in PR #531:
URL: https://github.com/apache/kafka-site/pull/531#discussion_r1252893916


##
cve-list.html:
##
@@ -9,6 +9,44 @@ Apache Kafka Security Vulnerabilities
 
 This page lists all security vulnerabilities fixed in released versions of 
Apache Kafka.
 
+  <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-34455">CVE-2023-34455</a> Clients using Snappy compression may cause OutOfMemoryError on brokers

Review Comment:
   updated in latest commit






[GitHub] [kafka-site] showuon commented on a diff in pull request #531: Add CVE-2023-34455 to cve-list

2023-07-05 Thread via GitHub


showuon commented on code in PR #531:
URL: https://github.com/apache/kafka-site/pull/531#discussion_r1252870499


##
cve-list.html:
##
@@ -9,6 +9,44 @@ Apache Kafka Security Vulnerabilities
 
 This page lists all security vulnerabilities fixed in released versions of 
Apache Kafka.
 
+  <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-34455">CVE-2023-34455</a> Clients using Snappy compression may cause OutOfMemoryError on brokers

Review Comment:
   What I see in the doc output is `OUTOFMEMORYERROR`, which is hard to read. 
Could we change it to `out of memory error`?






[VOTE] KIP-944 Support async runtimes in consumer

2023-07-05 Thread Erik van Oosten

Hello all,

I'd like to call a vote on KIP-944 Support async runtimes in consumer. 
It has been 'under discussion' for 7 days now. 'Under discussion' 
in quotes, because there have been 0 comments so far. I hope the KIP is 
clear!


KIP description: https://cwiki.apache.org/confluence/x/chw0Dw

Kind regards,
    Erik.