[jira] [Commented] (KAFKA-16414) Inconsistent active segment expiration behavior between retention.ms and retention.bytes

2024-05-18 Thread Luke Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17847628#comment-17847628
 ] 

Luke Chen commented on KAFKA-16414:
---

So, I think we have consensus that if we can include the active segment for 
retention.bytes as well, while keeping the tiered storage integration tests 
non-flaky, then it is good to make this change. But in [~ckamal] 's opinion, 
that's not easy to achieve. Maybe we can give it a quick try and see whether 
the time investment is worth it. That is, the current behavior has been there 
for a long time, and even if we don't change it, users seem to accept it. So if 
making the tiered storage integration tests reliable takes too much time, it 
might not be worth doing. WDYT [~brandboat] ? Thanks.

> Inconsistent active segment expiration behavior between retention.ms and 
> retention.bytes
> 
>
> Key: KAFKA-16414
> URL: https://issues.apache.org/jira/browse/KAFKA-16414
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 3.6.1
>Reporter: Kuan Po Tseng
>Assignee: Kuan Po Tseng
>Priority: Major
>
> This is a follow up issue on KAFKA-16385.
> Currently, there's a difference between how retention.ms and retention.bytes 
> handle active segment expiration:
> - retention.ms always expires the active segment when the max segment 
> timestamp matches the condition.
> - retention.bytes only expires the active segment when retention.bytes is 
> configured to zero.
> The behavior should be either to rotate active segments for both retention 
> configurations, or for neither.
> For more details, see
> https://issues.apache.org/jira/browse/KAFKA-16385?focusedCommentId=17829682&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17829682
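For context, the two settings contrasted above are ordinary topic-level log configs; an illustrative pair of values (examples only, not recommendations):

```properties
# Time-based retention: a segment is deleted once its newest timestamp is
# older than this threshold; today this applies to the active segment too.
retention.ms=604800000

# Size-based retention: oldest segments are deleted once the partition log
# exceeds this size; today the active segment is only deleted when this
# is set to 0.
retention.bytes=1073741824
```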



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] KAFKA-16784: Migrate TopicBasedRemoteLogMetadataManagerMultipleSubscriptionsTest to use ClusterTestExtensions [kafka]

2024-05-18 Thread via GitHub


FrankYang0529 opened a new pull request, #15992:
URL: https://github.com/apache/kafka/pull/15992

   * Replace `TopicBasedRemoteLogMetadataManagerHarness` with 
`RemoteLogMetadataManagerTestUtils#builder` in 
`TopicBasedRemoteLogMetadataManagerMultipleSubscriptionsTest`.
   * Use `ClusterTestExtensions` for 
`TopicBasedRemoteLogMetadataManagerMultipleSubscriptionsTest`.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-16414) Inconsistent active segment expiration behavior between retention.ms and retention.bytes

2024-05-18 Thread Kuan Po Tseng (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17847617#comment-17847617
 ] 

Kuan Po Tseng commented on KAFKA-16414:
---

c.c. [~showuon] , any thoughts on this one? Would appreciate it if you could 
give any feedback. Many thanks :)



[jira] [Commented] (KAFKA-16414) Inconsistent active segment expiration behavior between retention.ms and retention.bytes

2024-05-18 Thread Kuan Po Tseng (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17847616#comment-17847616
 ] 

Kuan Po Tseng commented on KAFKA-16414:
---

I'm still waiting for other Kafka PMC members/committers to agree on this. So 
far, it seems that only [~chia7712] has given a +1 on this.



Re: [PR] MINOR: fix incorrect formatter package in streams quickstart [kafka]

2024-05-18 Thread via GitHub


brandboat commented on code in PR #15991:
URL: https://github.com/apache/kafka/pull/15991#discussion_r1605934347


##
docs/streams/quickstart.html:
##
@@ -200,7 +200,7 @@ Step 4: St
 > 
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
 --topic streams-wordcount-output \
 --from-beginning \
---formatter kafka.tools.DefaultMessageFormatter \
+--formatter org.apache.kafka.tools.consumer.DefaultMessageFormatter \

Review Comment:
   I agree with this. Many scripts could be affected by this breaking change. 
We should implement a deprecation cycle.






Re: [PR] KAFKA-15307: Update/errors for deprecated config [kafka]

2024-05-18 Thread via GitHub


mjsax commented on code in PR #14448:
URL: https://github.com/apache/kafka/pull/14448#discussion_r1605892956


##
docs/streams/developer-guide/config-streams.html:
##
@@ -261,10 +261,10 @@ num.standby.replicasstatestore.cache.max.bytes
 Medium
-Maximum number of memory bytes to be used for 
record caches across all threads.
+Maximum number of memory bytes to be used for 
record caches across all threads. Note that at the debug level you can use 
cache.size to monitor the actual size of the Kafka Streams 
cache.
 10485760
   
-  cache.max.bytes.buffering (Deprecated. Use 
statestore.cache.max.bytes instead.)
+  cache.max.bytes.buffering (Deprecated. Use 
cache.max.bytes instead.)

Review Comment:
   Seems `statestore.cache.max.bytes` is correct?



##
streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java:
##
@@ -658,7 +660,7 @@ public class StreamsConfig extends AbstractConfig {
 public static final String METRICS_SAMPLE_WINDOW_MS_CONFIG = 
CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_CONFIG;
 
 /** {@code auto.include.jmx.reporter}
- * @deprecated and will be removed in 4.0.0 */
+ * @deprecated and will removed in 4.0.0 Users should instead include 
org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to 
enable the JmxReporter.} */

Review Comment:
   ```suggestion
* @deprecated and will be removed in 4.0.0; include {@link 
org.apache.kafka.common.metrics.JmxReporter} via config {@code 
metric.reporters} in order to enable the JmxReporter. */
   ```



##
docs/streams/developer-guide/config-streams.html:
##
@@ -307,25 +307,23 @@ num.standby.replicasnull
   
-  default.windowed.key.serde.inner 
(Deprecated. Use windowed.inner.class.serde instead.)
-Medium
-Default serializer/deserializer for the inner 
class of windowed keys, implementing the Serde interface.
-null
-  
-  default.windowed.value.serde.inner 
(Deprecated. Use windowed.inner.class.serde instead.)
+  default.windowed.key.serde.inner 
(Deprecated.)

Review Comment:
   Seems we don't need this line? It's duplicated above and below?



##
streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java:
##
@@ -500,7 +500,9 @@ public class StreamsConfig extends AbstractConfig {
 private static final String BUILT_IN_METRICS_VERSION_DOC = "Version of the 
built-in metrics to use.";
 
 /** {@code cache.max.bytes.buffering}
+

Review Comment:
   nit: about unnecessary empty lines (same right below)



##
docs/streams/developer-guide/config-streams.html:
##
@@ -307,25 +307,23 @@ num.standby.replicasnull
   
-  default.windowed.key.serde.inner 
(Deprecated. Use windowed.inner.class.serde instead.)

Review Comment:
   Why does this PR remove this row in the table?



##
docs/streams/developer-guide/config-streams.html:
##
@@ -261,10 +261,10 @@ num.standby.replicasstatestore.cache.max.bytes
 Medium
-Maximum number of memory bytes to be used for 
record caches across all threads.
+Maximum number of memory bytes to be used for 
record caches across all threads. Note that at the debug level you can use 
cache.size to monitor the actual size of the Kafka Streams 
cache.

Review Comment:
   Not clear to me, what the added sentence means? Are you referring to JMX 
metrics?



##
docs/streams/developer-guide/config-streams.html:
##
@@ -307,25 +307,23 @@ num.standby.replicasnull
   
-  default.windowed.key.serde.inner 
(Deprecated. Use windowed.inner.class.serde instead.)
-Medium
-Default serializer/deserializer for the inner 
class of windowed keys, implementing the Serde interface.
-null
-  
-  default.windowed.value.serde.inner 
(Deprecated. Use windowed.inner.class.serde instead.)
+  default.windowed.key.serde.inner 
(Deprecated.)
+

Review Comment:
   nit: remove empty line



##
docs/streams/developer-guide/config-streams.html:
##
@@ -307,25 +307,23 @@ num.standby.replicasnull
   
-  default.windowed.key.serde.inner 
(Deprecated. Use windowed.inner.class.serde instead.)
-Medium
-Default serializer/deserializer for the inner 
class of windowed keys, implementing the Serde interface.
-null
-  
-  default.windowed.value.serde.inner 
(Deprecated. Use windowed.inner.class.serde instead.)
+  default.windowed.key.serde.inner 
(Deprecated.)
+
+  default.windowed.key.serde.inner 
(Deprecated. Use windowed.inner.class.serde instead.)

Review Comment:
   duplicated line?




Re: [PR] MINOR: Fix warnings in streams javadoc [kafka]

2024-05-18 Thread via GitHub


mjsax commented on PR #15955:
URL: https://github.com/apache/kafka/pull/15955#issuecomment-2118998403

   Thanks for the cleanup! Highly appreciated!





Re: [PR] KAFKA-16625: Reverse lookup map from topic partitions to members [kafka]

2024-05-18 Thread via GitHub


rreddy-22 commented on code in PR #15974:
URL: https://github.com/apache/kafka/pull/15974#discussion_r1605859034


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/assignor/OptimizedUniformAssignmentBuilder.java:
##
@@ -59,7 +59,7 @@ public class OptimizedUniformAssignmentBuilder extends 
AbstractUniformAssignment
 /**
  * The assignment specification which includes member metadata.
  */
-private final AssignmentSpec assignmentSpec;
+private final GroupSpecImpl groupSpec;

Review Comment:
   thanks for the catch, I completely missed this!
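For readers following along, the reverse-lookup structure named in this PR's title (topic partitions back to group members) can be sketched with plain collections; the names below are illustrative, not the actual Kafka group-coordinator types:

```java
import java.util.*;

public class ReverseAssignment {
    // Invert a member -> partitions assignment into partition -> members.
    static Map<String, Set<String>> invert(Map<String, List<String>> memberToPartitions) {
        Map<String, Set<String>> partitionToMembers = new HashMap<>();
        for (Map.Entry<String, List<String>> e : memberToPartitions.entrySet()) {
            for (String partition : e.getValue()) {
                // Each partition may be owned by several members across epochs.
                partitionToMembers.computeIfAbsent(partition, k -> new HashSet<>())
                                  .add(e.getKey());
            }
        }
        return partitionToMembers;
    }

    public static void main(String[] args) {
        Map<String, List<String>> assignment = new HashMap<>();
        assignment.put("member-1", Arrays.asList("topicA-0", "topicA-1"));
        assignment.put("member-2", Arrays.asList("topicA-1"));
        System.out.println(invert(assignment));
    }
}
```

The inverse map makes "who owns this partition?" an O(1) lookup instead of a scan over every member's assignment.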






Re: [PR] KAFKA-16789: Fix thread leak detection for event handler threads [kafka]

2024-05-18 Thread via GitHub


chia7712 commented on PR #15984:
URL: https://github.com/apache/kafka/pull/15984#issuecomment-2118911871

   I will merge this PR after the QA build passes.





[jira] [Commented] (KAFKA-16790) Calls to RemoteLogManager are made before it is configured

2024-05-18 Thread Muralidhar Basani (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17847584#comment-17847584
 ] 

Muralidhar Basani commented on KAFKA-16790:
---

After a bit of further digging:

-> remoteLogManagerOpt = createRemoteLogManager() in BrokerServer.scala 
[here|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/BrokerServer.scala#L207]
 is invoked before brokerMetadataPublisher is initialized.

However, if the property REMOTE_LOG_STORAGE_SYSTEM_ENABLE_PROP = 
"remote.log.storage.system.enable" is set to true (along with a few other 
related properties), remoteLogManager is configured properly; otherwise it is 
None.

So, imo we don't have to fix anything.
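The defensive option raised in the issue (skip onLeadershipChange until the manager is configured) boils down to a guard flag; a simplified, stdlib-only sketch with hypothetical names, not the actual RemoteLogManager code:

```java
import java.util.List;

public class GuardedManager {
    // Set once configure() has run; volatile so callback threads see it.
    private volatile boolean configured = false;
    private int changesHandled = 0;

    public void configure() {
        configured = true;
    }

    // Returns true if the callback was actually processed.
    public boolean onLeadershipChange(List<Integer> leaders) {
        if (!configured) {
            // Drop the callback instead of touching half-initialized state.
            return false;
        }
        changesHandled++;
        return true;
    }

    public int changesHandled() {
        return changesHandled;
    }

    public static void main(String[] args) {
        GuardedManager m = new GuardedManager();
        m.onLeadershipChange(List.of(0)); // ignored: not configured yet
        m.configure();
        m.onLeadershipChange(List.of(0)); // processed
        System.out.println(m.changesHandled());
    }
}
```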

> Calls to RemoteLogManager are made before it is configured
> --
>
> Key: KAFKA-16790
> URL: https://issues.apache.org/jira/browse/KAFKA-16790
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Affects Versions: 3.8.0
>Reporter: Christo Lolov
>Priority: Major
>
> BrokerMetadataPublisher#onMetadataUpdate calls ReplicaManager#applyDelta (1) 
> which in turn calls RemoteLogManager#onLeadershipChange (2), however, the 
> RemoteLogManager is configured after the BrokerMetadataPublisher starts 
> running (3, 4). This is incorrect, we either need to initialise the 
> RemoteLogManager before we start the BrokerMetadataPublisher or we need to 
> skip calls to onLeadershipChange if the RemoteLogManager is not initialised.
> (1) 
> [https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/metadata/BrokerMetadataPublisher.scala#L151]
> (2) 
> [https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/ReplicaManager.scala#L2737]
> (3) 
> [https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/BrokerServer.scala#L432]
> (4) 
> [https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/BrokerServer.scala#L515]
> The way to reproduce the problem is by looking at the following changes
> {code:java}
>  config/kraft/broker.properties                         | 10 ++
>  .../main/java/kafka/log/remote/RemoteLogManager.java   |  8 +++-
>  core/src/main/scala/kafka/server/ReplicaManager.scala  |  6 +-
>  3 files changed, 22 insertions(+), 2 deletions(-)diff --git 
> a/config/kraft/broker.properties b/config/kraft/broker.properties
> index 2d15997f28..39d126cf87 100644
> --- a/config/kraft/broker.properties
> +++ b/config/kraft/broker.properties
> @@ -127,3 +127,13 @@ log.segment.bytes=1073741824
>  # The interval at which log segments are checked to see if they can be 
> deleted according
>  # to the retention policies
>  log.retention.check.interval.ms=30
> +
> +remote.log.storage.system.enable=true
> +remote.log.metadata.manager.listener.name=PLAINTEXT
> +remote.log.storage.manager.class.name=org.apache.kafka.server.log.remote.storage.LocalTieredStorage
> +remote.log.storage.manager.class.path=/home/ec2-user/kafka/storage/build/libs/kafka-storage-3.8.0-SNAPSHOT-test.jar
> +remote.log.storage.manager.impl.prefix=rsm.config.
> +remote.log.metadata.manager.impl.prefix=rlmm.config.
> +rsm.config.dir=/tmp/kafka-remote-storage
> +rlmm.config.remote.log.metadata.topic.replication.factor=1
> +log.retention.check.interval.ms=1000
> diff --git a/core/src/main/java/kafka/log/remote/RemoteLogManager.java 
> b/core/src/main/java/kafka/log/remote/RemoteLogManager.java
> index 6555b7c0cd..e84a072abc 100644
> --- a/core/src/main/java/kafka/log/remote/RemoteLogManager.java
> +++ b/core/src/main/java/kafka/log/remote/RemoteLogManager.java
> @@ -164,6 +164,7 @@ public class RemoteLogManager implements Closeable {
>      // The endpoint for remote log metadata manager to connect to
>      private Optional endpoint = Optional.empty();
>      private boolean closed = false;
> +    private boolean up = false;
>  
>      /**
>       * Creates RemoteLogManager instance with the given arguments.
> @@ -298,6 +299,7 @@ public class RemoteLogManager implements Closeable {
>          // in connecting to the brokers or remote storages.
>          configureRSM();
>          configureRLMM();
> +        up = true;
>      }
>  
>      public RemoteStorageManager storageManager() {
> @@ -329,7 +331,11 @@ public class RemoteLogManager implements Closeable {
>      public void onLeadershipChange(Set partitionsBecomeLeader,
>                                     Set partitionsBecomeFollower,
>                                     Map topicIds) {
> -        LOGGER.debug("Received leadership changes for leaders: {} and 
> followers: {}", partitionsBecomeLeader, partitionsBecomeFollower);
> +        if (!up) {
> +            LOGGER.error("NullPointerException");
> +            return;
> +        }
> +        LOGGER.error("Received leadership changes for leaders: {}

Re: [PR] MINOR: fix incorrect formatter package in streams quickstart [kafka]

2024-05-18 Thread via GitHub


chia7712 commented on code in PR #15991:
URL: https://github.com/apache/kafka/pull/15991#discussion_r1605853497


##
docs/streams/quickstart.html:
##
@@ -200,7 +200,7 @@ Step 4: St
 > 
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
 --topic streams-wordcount-output \
 --from-beginning \
---formatter kafka.tools.DefaultMessageFormatter \
+--formatter org.apache.kafka.tools.consumer.DefaultMessageFormatter \

Review Comment:
   It seems to me this is a kind of compatibility break. We should allow users 
to keep using `kafka.tools.NoOpMessageFormatter`, 
`kafka.tools.DefaultMessageFormatter`, and 
`kafka.tools.LoggingMessageFormatter`, but also display warning messages 
saying those "strings" will be removed in 4.0.
   
   @mimaison WDYT?
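One sketch of the deprecation cycle proposed here: keep accepting the old `kafka.tools.*` formatter names, translate them to the relocated classes, and print a warning. The alias table and method names below are hypothetical, not Kafka's actual implementation:

```java
import java.util.Map;

public class FormatterAliases {
    // Old pre-relocation class names mapped to the new package, so existing
    // scripts keep working during the deprecation window (illustrative list).
    static final Map<String, String> DEPRECATED_ALIASES = Map.of(
        "kafka.tools.DefaultMessageFormatter",
            "org.apache.kafka.tools.consumer.DefaultMessageFormatter",
        "kafka.tools.LoggingMessageFormatter",
            "org.apache.kafka.tools.consumer.LoggingMessageFormatter",
        "kafka.tools.NoOpMessageFormatter",
            "org.apache.kafka.tools.consumer.NoOpMessageFormatter");

    // Resolve a user-supplied --formatter value, warning on deprecated names.
    static String resolve(String name) {
        String replacement = DEPRECATED_ALIASES.get(name);
        if (replacement != null) {
            System.err.println("WARNING: " + name
                + " is deprecated and will be removed in 4.0; use " + replacement);
            return replacement;
        }
        return name;
    }

    public static void main(String[] args) {
        System.out.println(resolve("kafka.tools.DefaultMessageFormatter"));
    }
}
```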






[jira] [Assigned] (KAFKA-16791) Add thread detection to ClusterTestExtensions

2024-05-18 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-16791:
--

Assignee: bboyleonp  (was: Chia-Ping Tsai)

> Add thread detection to ClusterTestExtensions
> -
>
> Key: KAFKA-16791
> URL: https://issues.apache.org/jira/browse/KAFKA-16791
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: bboyleonp
>Priority: Minor
>
> `ClusterTestExtensions` should implement `BeforeAllCallback` and 
> `AfterAllCallback` by `TestUtils.verifyNoUnexpectedThreads`
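The core of such a thread-leak check can be sketched with the standard library alone: snapshot live thread names before the test class runs and diff afterwards. This is a simplified illustration, not the actual `TestUtils.verifyNoUnexpectedThreads` code:

```java
import java.util.HashSet;
import java.util.Set;

public class ThreadLeakCheck {
    // Snapshot the names of all currently live threads (BeforeAll step).
    static Set<String> snapshot() {
        Set<String> names = new HashSet<>();
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            names.add(t.getName());
        }
        return names;
    }

    // Any live thread not present in the snapshot is a suspected leak
    // (AfterAll step); a real check would also filter known daemon threads.
    static Set<String> leaked(Set<String> before) {
        Set<String> after = snapshot();
        after.removeAll(before);
        return after;
    }

    public static void main(String[] args) {
        Set<String> before = snapshot();
        System.out.println("Leaked: " + leaked(before));
    }
}
```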





[jira] [Commented] (KAFKA-15242) FixedKeyProcessor testing is unusable

2024-05-18 Thread Matej Sprysl (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17847562#comment-17847562
 ] 

Matej Sprysl commented on KAFKA-15242:
--

Since `FixedKeyProcessor`, `FixedKeyRecord`, and even `TestRecord` are 
standalone classes/interfaces, `TestFixedKeyRecord` needs to be implemented as 
well.

Therefore, issue KAFKA-15143 is a subset of this one, because it has no mention 
of `TestFixedKeyRecord`. That issue should be renamed to cover this one.
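To make the shape of the missing helper concrete, here is a minimal, stdlib-only sketch of what a `TestFixedKeyRecord` could look like; the class and method names are hypothetical and merely mirror the fixed-key contract (the key is set at construction and there is deliberately no `withKey`):

```java
public class TestFixedKeyRecord<K, V> {
    private final K key;
    private final V value;
    private final long timestamp;

    public TestFixedKeyRecord(K key, V value, long timestamp) {
        this.key = key;
        this.value = value;
        this.timestamp = timestamp;
    }

    public K key() { return key; }
    public V value() { return value; }
    public long timestamp() { return timestamp; }

    // Only value transformations are exposed; the key stays fixed, which is
    // the invariant FixedKeyProcessor relies on.
    public <NewV> TestFixedKeyRecord<K, NewV> withValue(NewV newValue) {
        return new TestFixedKeyRecord<>(key, newValue, timestamp);
    }

    public static void main(String[] args) {
        TestFixedKeyRecord<String, Integer> r =
            new TestFixedKeyRecord<>("word", 1, 0L);
        System.out.println(r.withValue(2).key());
    }
}
```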

> FixedKeyProcessor testing is unusable
> -
>
> Key: KAFKA-15242
> URL: https://issues.apache.org/jira/browse/KAFKA-15242
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Zlstibor Veljkovic
>Assignee: Alexander Aghili
>Priority: Major
>
> Using a mock processor context to get the forwarded message doesn't work.
> Also, there is no well-documented way of testing FixedKeyProcessors.
> Please see the repo at [https://github.com/zveljkovic/kafka-repro]
> but most important piece is test file with runtime and compile time errors:
> [https://github.com/zveljkovic/kafka-repro/blob/master/src/test/java/com/example/demo/MyFixedKeyProcessorTest.java]
>  







[PR] MINOR: fix incorrect formatter package in streams quickstart [kafka]

2024-05-18 Thread via GitHub


brandboat opened a new pull request, #15991:
URL: https://github.com/apache/kafka/pull/15991

   
https://github.com/apache/kafka/commit/0bf830fc9c3915bc99b6e487e6083dabd593c5d3 
moved DefaultMessageFormatter package from `kafka.tools` to 
`org.apache.kafka.tools.consumer`
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[jira] [Assigned] (KAFKA-16786) New consumer subscribe should not require the deprecated partition.assignment.strategy

2024-05-18 Thread Phuc Hong Tran (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phuc Hong Tran reassigned KAFKA-16786:
--

Assignee: Phuc Hong Tran

> New consumer subscribe should not require the deprecated 
> partition.assignment.strategy
> --
>
> Key: KAFKA-16786
> URL: https://issues.apache.org/jira/browse/KAFKA-16786
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Affects Versions: 3.7.0
>Reporter: Lianet Magrans
>Assignee: Phuc Hong Tran
>Priority: Major
>  Labels: kip-848-client-support
> Fix For: 3.8
>
>
> The partition.assignment.strategy config is deprecated with the new consumer 
> group protocol KIP-848. With the new protocol, server side assignors are 
> supported for now, defined in the property
> group.remote.assignor, and with default values selected by the broker, so 
> it's not even a required property. 
> The new AsyncKafkaConsumer supports the new protocol only, but it currently 
> throws an IllegalStateException if a call to subscribe is made and the 
> deprecated config partition.assignment.strategy is empty (see 
> [throwIfNoAssignorsConfigured|https://github.com/apache/kafka/blob/056d232f4e28bf8e67e00f8ed2c103fdb0f3b78e/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AsyncKafkaConsumer.java#L1715]).
>  
> We should remove the reference to ConsumerPartitionAssignor in the 
> AsyncKafkaConsumer, along with its validation for non-empty on subscribe 
> (only use it has)





Re: [PR] KAFKA-16429: Enhance all configs which can trigger rolling of new segment [kafka]

2024-05-18 Thread via GitHub


brandboat commented on PR #15990:
URL: https://github.com/apache/kafka/pull/15990#issuecomment-2118751316

   Apologies for the late PR. Gentle ping @showuon , @chia7712 , @jeqo . Could 
you take a look when you are available? Many thanks 😃 





[PR] KAFKA-16429: Enhance all configs which can trigger rolling of new segment [kafka]

2024-05-18 Thread via GitHub


brandboat opened a new pull request, #15990:
URL: https://github.com/apache/kafka/pull/15990

   related to https://issues.apache.org/jira/browse/KAFKA-16429.
   
   Update the documentation about broker/topic-side retention and segment 
configurations.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





Re: [PR] KAFKA-16574: The metrics of LogCleaner disappear after reconfiguration [kafka]

2024-05-18 Thread via GitHub


chiacyu commented on code in PR #15863:
URL: https://github.com/apache/kafka/pull/15863#discussion_r1605700828


##
core/src/main/scala/kafka/log/LogCleaner.scala:
##
@@ -182,6 +183,27 @@ class LogCleaner(initialConfig: CleanerConfig,
 cleanerManager.removeMetrics()

Review Comment:
   Oh, I got your point now. That makes sense to me; I will apply the change. 
Please take a look. Thanks.
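The metrics bug under discussion follows a common re-registration pattern: a reconfigure step that tears the old component down must also re-register its gauges, or they silently disappear. A toy, stdlib-only sketch (the registry and metric names are illustrative, not the real LogCleaner internals):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class ReconfigurableCleaner {
    // Toy metrics registry standing in for the real metrics group.
    final Map<String, Supplier<Object>> metrics = new HashMap<>();

    void registerMetrics() {
        metrics.put("cleaner-recopy-percent", () -> 0);
        metrics.put("max-clean-time-secs", () -> 0);
    }

    void removeMetrics() {
        metrics.clear();
    }

    // The fix pattern: after tearing the old cleaner's metrics down during
    // reconfiguration, register them again for the new instance.
    void reconfigure() {
        removeMetrics();
        registerMetrics(); // without this line the gauges are gone for good
    }

    public static void main(String[] args) {
        ReconfigurableCleaner c = new ReconfigurableCleaner();
        c.registerMetrics();
        c.reconfigure();
        System.out.println(c.metrics.keySet());
    }
}
```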






[PR] KAFKA-16223: Replace EasyMock/PowerMock with Mockito for KafkaConfigBackingStoreTest [kafka]

2024-05-18 Thread via GitHub


chiacyu opened a new pull request, #15989:
URL: https://github.com/apache/kafka/pull/15989

   This PR removes the PowerMock and EasyMock usages in 
KafkaConfigBackingStoreTest and rewrites the corresponding tests with Mockito. 
Thanks.
   

