Re: [PR] KAFKA-16954: fix consumer close to release assignment in background [kafka]

2024-06-17 Thread via GitHub


lucasbru commented on PR #16376:
URL: https://github.com/apache/kafka/pull/16376#issuecomment-2175272676

   @jlprat Fix for blocker KAFKA-16954 merged to 3.8 branch





Re: [PR] KAFKA-16954: fix consumer close to release assignment in background [kafka]

2024-06-17 Thread via GitHub


lucasbru merged PR #16376:
URL: https://github.com/apache/kafka/pull/16376





[PR] ConfigCommand should not allow to define both `entity-default` and `entity-name` [kafka]

2024-06-17 Thread via GitHub


m1a2st opened a new pull request, #16381:
URL: https://github.com/apache/kafka/pull/16381

   https://issues.apache.org/jira/browse/KAFKA-16959
   Add a new check and tests so that both parameters cannot be defined together.
   
   waiting for https://github.com/apache/kafka/pull/16317
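
   A minimal sketch of the intended mutual-exclusion check (illustrative only; the actual ConfigCommand is written in Scala, and the helper below is hypothetical):

   ```java
   // Hypothetical helper, not the actual kafka.admin.ConfigCommand code (which is Scala):
   // reject invocations that pass both --entity-name and --entity-default.
   static void checkEntityArgs(boolean hasEntityName, boolean hasEntityDefault) {
       if (hasEntityName && hasEntityDefault) {
           throw new IllegalArgumentException(
               "Only one of --entity-name or --entity-default can be specified");
       }
   }
   ```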
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[PR] use the dsl store supplier for fkj subscriptions [kafka]

2024-06-17 Thread via GitHub


rodesai opened a new pull request, #16380:
URL: https://github.com/apache/kafka/pull/16380

   (no comment)





[jira] [Updated] (KAFKA-16480) ListOffsets change should have an associated API/IBP version update

2024-06-17 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat updated KAFKA-16480:
---
Fix Version/s: 3.8.0

> ListOffsets change should have an associated API/IBP version update
> ---
>
> Key: KAFKA-16480
> URL: https://issues.apache.org/jira/browse/KAFKA-16480
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.8.0
>Reporter: Christo Lolov
>Assignee: Christo Lolov
>Priority: Blocker
> Fix For: 3.8.0
>
>
> https://issues.apache.org/jira/browse/KAFKA-16154 introduced the changes to 
> the ListOffsets API to accept latest-tiered-timestamp and return the 
> corresponding offset.
> Those changes should have a) increased the version of the ListOffsets API b) 
> increased the inter-broker protocol version c) hidden the latest version of 
> the ListOffsets behind the latestVersionUnstable flag
> The purpose of this task is to remedy said miss





[jira] [Updated] (KAFKA-16480) ListOffsets change should have an associated API/IBP version update

2024-06-17 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat updated KAFKA-16480:
---
Affects Version/s: 3.8.0

> ListOffsets change should have an associated API/IBP version update
> ---
>
> Key: KAFKA-16480
> URL: https://issues.apache.org/jira/browse/KAFKA-16480
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.8.0
>Reporter: Christo Lolov
>Assignee: Christo Lolov
>Priority: Blocker
>
> https://issues.apache.org/jira/browse/KAFKA-16154 introduced the changes to 
> the ListOffsets API to accept latest-tiered-timestamp and return the 
> corresponding offset.
> Those changes should have a) increased the version of the ListOffsets API b) 
> increased the inter-broker protocol version c) hidden the latest version of 
> the ListOffsets behind the latestVersionUnstable flag
> The purpose of this task is to remedy said miss





Re: [PR] KAFKA-16968: Make 3.8-IV0 a stable MetadataVersion and create 3.9-IV0 [kafka]

2024-06-17 Thread via GitHub


jlprat commented on PR #16347:
URL: https://github.com/apache/kafka/pull/16347#issuecomment-2175231482

   Hi @junrao, I marked https://issues.apache.org/jira/projects/KAFKA/issues/KAFKA-16480 as a blocker for 3.8.





[jira] [Updated] (KAFKA-16480) ListOffsets change should have an associated API/IBP version update

2024-06-17 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat updated KAFKA-16480:
---
Priority: Blocker  (was: Major)

> ListOffsets change should have an associated API/IBP version update
> ---
>
> Key: KAFKA-16480
> URL: https://issues.apache.org/jira/browse/KAFKA-16480
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Christo Lolov
>Assignee: Christo Lolov
>Priority: Blocker
>
> https://issues.apache.org/jira/browse/KAFKA-16154 introduced the changes to 
> the ListOffsets API to accept latest-tiered-timestamp and return the 
> corresponding offset.
> Those changes should have a) increased the version of the ListOffsets API b) 
> increased the inter-broker protocol version c) hidden the latest version of 
> the ListOffsets behind the latestVersionUnstable flag
> The purpose of this task is to remedy said miss





Re: [PR] KAFKA-16988: add 1 more node for test_exactly_once_source system test [kafka]

2024-06-17 Thread via GitHub


showuon commented on PR #16379:
URL: https://github.com/apache/kafka/pull/16379#issuecomment-2175200173

   @soarez, this was found while running the system tests against the v3.7.1 RC build. We should include this fix in RC2.
   cc @jlprat, this also impacts v3.8.0. We should backport this fix into the v3.8 branch.
   Thanks.





[PR] KAFKA-16988: add 1 more node for test_exactly_once_source system test [kafka]

2024-06-17 Thread via GitHub


showuon opened a new pull request, #16379:
URL: https://github.com/apache/kafka/pull/16379

   `ducktape.cluster.node_container.InsufficientResourcesError: linux nodes requested: 1. linux nodes available: 0` is thrown when running the `ConnectDistributedTest#test_exactly_once_source` system test. This change adds one more node for the test_exactly_once_source system test.
   
   Confirmed that, after applying this change, the previously failing test passes without error.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[jira] [Updated] (KAFKA-16982) Docker Official Image Build and Test

2024-06-17 Thread Krish Vora (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Vora updated KAFKA-16982:
---
Description: 
Provide the image type and kafka version to {{Docker Official Image Build 
Test}} workflow. It will generate a test report and CVE report that can be 
shared with the community, if need be,
{code:java}
image_type: jvm
kafka_version: 3.7.0{code}
 

  was:
Provide the image type and kafka version to {{Docker Official Image Build 
Test}} workflow. It will generate a test report and CVE report that can be 
shared with the community, if need be,
image_type: jvm
kafka_version: 3.7.0

 


> Docker Official Image Build and Test
> 
>
> Key: KAFKA-16982
> URL: https://issues.apache.org/jira/browse/KAFKA-16982
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.7.0
>Reporter: Krish Vora
>Assignee: Krish Vora
>Priority: Major
>
> Provide the image type and kafka version to {{Docker Official Image Build 
> Test}} workflow. It will generate a test report and CVE report that can be 
> shared with the community, if need be,
> {code:java}
> image_type: jvm
> kafka_version: 3.7.0{code}
>  





[jira] [Updated] (KAFKA-16983) Generate the PR for Docker Official Images repo

2024-06-17 Thread Krish Vora (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Vora updated KAFKA-16983:
---
Description: 
# Run the {{docker/generate_kafka_pr_template.py}} script from trunk, by 
providing it the image type. Update the existing entry. 

{code:java}
python generate_kafka_pr_template.py --image-type=jvm{code}

 
      2. Copy this to raise a new PR in [Docker Hub's Docker Official 
Repo|https://github.com/docker-library/official-images/tree/master/library/kafka]
 , which modifies              the exisiting entry.

  was:
Run the {{docker/generate_kafka_pr_template.py}} script from trunk, by 
providing it the image type. Update the existing entry. 
python generate_kafka_pr_template.py --image-type=jvm

 
Copy this to raise a new PR in [Docker Hub's Docker Official 
Repo|https://github.com/docker-library/official-images/tree/master/library/kafka]
 , which modifies the exisiting entry.


> Generate the PR for Docker Official Images repo
> ---
>
> Key: KAFKA-16983
> URL: https://issues.apache.org/jira/browse/KAFKA-16983
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.7.0
>Reporter: Krish Vora
>Assignee: Krish Vora
>Priority: Major
>
> # Run the {{docker/generate_kafka_pr_template.py}} script from trunk, by 
> providing it the image type. Update the existing entry. 
> {code:java}
> python generate_kafka_pr_template.py --image-type=jvm{code}
>  
>       2. Copy this to raise a new PR in [Docker Hub's Docker Official 
> Repo|https://github.com/docker-library/official-images/tree/master/library/kafka]
>  , which modifies              the exisiting entry.





[jira] [Updated] (KAFKA-16983) Generate the PR for Docker Official Images repo

2024-06-17 Thread Krish Vora (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Vora updated KAFKA-16983:
---
Description: 
# Run the {{docker/generate_kafka_pr_template.py}} script from trunk, by 
providing it the image type. Update the existing entry. 

{code:java}
python generate_kafka_pr_template.py --image-type=jvm{code}
 
      2. Copy this to raise a new PR in [Docker Hub's Docker Official 
Repo|https://github.com/docker-library/official-images/tree/master/library/kafka]
 , which modifies the exisiting entry.

  was:
# Run the {{docker/generate_kafka_pr_template.py}} script from trunk, by 
providing it the image type. Update the existing entry. 

{code:java}
python generate_kafka_pr_template.py --image-type=jvm{code}

 
      2. Copy this to raise a new PR in [Docker Hub's Docker Official 
Repo|https://github.com/docker-library/official-images/tree/master/library/kafka]
 , which modifies              the exisiting entry.


> Generate the PR for Docker Official Images repo
> ---
>
> Key: KAFKA-16983
> URL: https://issues.apache.org/jira/browse/KAFKA-16983
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.7.0
>Reporter: Krish Vora
>Assignee: Krish Vora
>Priority: Major
>
> # Run the {{docker/generate_kafka_pr_template.py}} script from trunk, by 
> providing it the image type. Update the existing entry. 
> {code:java}
> python generate_kafka_pr_template.py --image-type=jvm{code}
>  
>       2. Copy this to raise a new PR in [Docker Hub's Docker Official 
> Repo|https://github.com/docker-library/official-images/tree/master/library/kafka]
>  , which modifies the exisiting entry.





[jira] [Updated] (KAFKA-16980) Extract docker official image artifact

2024-06-17 Thread Krish Vora (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Vora updated KAFKA-16980:
---
Description: 
# Run the {{docker/extract_docker_official_image_artifact.py}} script, by 
providing it the path to the downloaded artifact. Ensure that this creates a 
new directory under {{{}docker/docker_official_images/kafka_version{}}}.

{code:java}
python extract_docker_official_image_artifact.py 
--path_to_downloaded_artifact=path/to/downloaded/artifact{code}
       *2. Commit these changes to AK trunk.*

 

  was:
# Run the {{docker/extract_docker_official_image_artifact.py}} script, by 
providing it the path to the downloaded artifact. Ensure that this creates a 
new directory under {{{}docker/docker_official_images/kafka_version{}}}.

{code:java}
python extract_docker_official_image_artifact.py 
--path_to_downloaded_artifact=path/to/downloaded/artifact{code}

 # *Commit these changes to AK trunk.*

 


> Extract docker official image artifact
> --
>
> Key: KAFKA-16980
> URL: https://issues.apache.org/jira/browse/KAFKA-16980
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.7.0
>Reporter: Krish Vora
>Assignee: Krish Vora
>Priority: Major
>
> # Run the {{docker/extract_docker_official_image_artifact.py}} script, by 
> providing it the path to the downloaded artifact. Ensure that this creates a 
> new directory under {{{}docker/docker_official_images/kafka_version{}}}.
> {code:java}
> python extract_docker_official_image_artifact.py 
> --path_to_downloaded_artifact=path/to/downloaded/artifact{code}
>        *2. Commit these changes to AK trunk.*
>  





[jira] [Updated] (KAFKA-16979) Prepare Docker Official Image Source for 3.7.0 release

2024-06-17 Thread Krish Vora (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Vora updated KAFKA-16979:
---
Description: 
Provide the image type and kafka version to {{Docker Prepare Docker Official 
Image Source}} workflow. It will generate a artifact containing the static 
Dockerfile and assets for that specific version. Download the same from the 
workflow.
{code:java}
image_type: jvm
kafka_version: 3.7.0{code}
 

  was:
Provide the image type and kafka version to {{Docker Prepare Docker Official 
Image Source}} workflow. It will generate a artifact containing the static 
Dockerfile and assets for that specific version. Download the same from the 
workflow.
image_type: jvm
kafka_version: 3.7.0

 


> Prepare Docker Official Image Source for 3.7.0 release
> --
>
> Key: KAFKA-16979
> URL: https://issues.apache.org/jira/browse/KAFKA-16979
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.7.0
>Reporter: Krish Vora
>Assignee: Krish Vora
>Priority: Major
>
> Provide the image type and kafka version to {{Docker Prepare Docker Official 
> Image Source}} workflow. It will generate a artifact containing the static 
> Dockerfile and assets for that specific version. Download the same from the 
> workflow.
> {code:java}
> image_type: jvm
> kafka_version: 3.7.0{code}
>  





[jira] [Updated] (KAFKA-16980) Extract docker official image artifact

2024-06-17 Thread Krish Vora (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Vora updated KAFKA-16980:
---
Description: 
# Run the {{docker/extract_docker_official_image_artifact.py}} script, by 
providing it the path to the downloaded artifact. Ensure that this creates a 
new directory under {{{}docker/docker_official_images/kafka_version{}}}.

{code:java}
python extract_docker_official_image_artifact.py 
--path_to_downloaded_artifact=path/to/downloaded/artifact{code}

 # *Commit these changes to AK trunk.*

 

  was:
Run the {{docker/extract_docker_official_image_artifact.py}} script, by 
providing it the path to the downloaded artifact. Ensure that this creates a 
new directory under {{{}docker/docker_official_images/kafka_version{}}}.
python extract_docker_official_image_artifact.py 
--path_to_downloaded_artifact=path/to/downloaded/artifact

 


> Extract docker official image artifact
> --
>
> Key: KAFKA-16980
> URL: https://issues.apache.org/jira/browse/KAFKA-16980
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.7.0
>Reporter: Krish Vora
>Assignee: Krish Vora
>Priority: Major
>
> # Run the {{docker/extract_docker_official_image_artifact.py}} script, by 
> providing it the path to the downloaded artifact. Ensure that this creates a 
> new directory under {{{}docker/docker_official_images/kafka_version{}}}.
> {code:java}
> python extract_docker_official_image_artifact.py 
> --path_to_downloaded_artifact=path/to/downloaded/artifact{code}
>  # *Commit these changes to AK trunk.*
>  





Re: [PR] KAFKA-16977: Reapply dynamic remote configs after broker restart [kafka]

2024-06-17 Thread via GitHub


satishd commented on PR #16353:
URL: https://github.com/apache/kafka/pull/16353#issuecomment-2175033037

   Thanks @jlprat , merged to 3.8.





Re: [PR] KAFKA-5072[WIP]: Kafka topics should allow custom metadata configs within some config namespace [kafka]

2024-06-17 Thread via GitHub


wspragg commented on PR #2873:
URL: https://github.com/apache/kafka/pull/2873#issuecomment-2175031620

   This is a really important feature and we have been waiting for it for far too long. Can we get this released?





Re: [PR] KAFKA-16803: Update ShadowJavaPlugin [kafka]

2024-06-17 Thread via GitHub


Nancy-ksolves commented on PR #16295:
URL: https://github.com/apache/kafka/pull/16295#issuecomment-2175002646

   So maybe we can consider this as a temporary solution and then think about 
removing that altogether later. Thoughts?
   
   





Re: [PR] KAFKA-16977: Reapply dynamic remote configs after broker restart [kafka]

2024-06-17 Thread via GitHub


jlprat commented on PR #16353:
URL: https://github.com/apache/kafka/pull/16353#issuecomment-2174993552

   Yes, it can be ported to 3.8. Thanks @satishd 





Re: [PR] KAFKA-16977: Reapply dynamic remote configs after broker restart [kafka]

2024-06-17 Thread via GitHub


satishd commented on PR #16353:
URL: https://github.com/apache/kafka/pull/16353#issuecomment-2174953555

   @jlprat This is a bug fix that should be pushed to 3.8. wdyt?





Re: [PR] KAFKA-16977: Reapply dynamic remote configs after broker restart [kafka]

2024-06-17 Thread via GitHub


satishd merged PR #16353:
URL: https://github.com/apache/kafka/pull/16353





Re: [PR] KAFKA-16977: Reapply dynamic remote configs after broker restart [kafka]

2024-06-17 Thread via GitHub


satishd commented on PR #16353:
URL: https://github.com/apache/kafka/pull/16353#issuecomment-2174944290

   A few unrelated test failures, merging it to trunk for now.





Re: [PR] KAFKA-16958 add STRICT_STUBS to EndToEndLatencyTest, OffsetCommitCallbackInvokerTest, ProducerPerformanceTest, and TopologyTest [kafka]

2024-06-17 Thread via GitHub


dujian0068 commented on PR #16348:
URL: https://github.com/apache/kafka/pull/16348#issuecomment-2174890810

   @chia7712 
   already edited





[jira] [Resolved] (KAFKA-16969) KRaft unable to upgrade to v3.7.1 and later when multiple log dir is set

2024-06-17 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen resolved KAFKA-16969.
---
Fix Version/s: 3.8.0
   3.7.1
 Assignee: Igor Soarez
   Resolution: Fixed

> KRaft unable to upgrade to v3.7.1 and later when multiple log dir is set
> 
>
> Key: KAFKA-16969
> URL: https://issues.apache.org/jira/browse/KAFKA-16969
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.8.0, 3.7.1
>Reporter: Luke Chen
>Assignee: Igor Soarez
>Priority: Blocker
> Fix For: 3.8.0, 3.7.1
>
>
> After KAFKA-16606, we added validation metadata version for JBOD support. 
> This validation works well in isolated KRaft mode (i.e. separate 
> controller/broker node). But when in combined mode, this validation will let 
> the node fail to startup. The log will be like this:
> {code:java}
> [2024-06-15 16:00:45,621] INFO [BrokerServer id=1] Waiting for the broker 
> metadata publishers to be installed (kafka.server.BrokerServer)
> [2024-06-15 16:00:45,621] INFO [BrokerServer id=1] Finished waiting for the 
> broker metadata publishers to be installed (kafka.server.BrokerServer)
> [2024-06-15 16:00:45,621] INFO [BrokerServer id=1] Waiting for the controller 
> to acknowledge that we are caught up (kafka.server.BrokerServer)
> [2024-06-15 16:00:45,621] INFO [MetadataLoader id=1] InitializeNewPublishers: 
> initializing MetadataVersionPublisher(id=1) with a snapshot at offset 4 
> (org.apache.kafka.image.loader.MetadataLoader)
> [2024-06-15 16:00:45,621] ERROR Encountered metadata publishing fault: Broker 
> configuration does not support the cluster MetadataVersion 
> (org.apache.kafka.server.fault.LoggingFaultHandler)
> java.lang.IllegalArgumentException: requirement failed: Multiple log 
> directories (aka JBOD) are not supported in the current MetadataVersion 
> 3.6-IV2. Need 3.7-IV2 or higher
>     at scala.Predef$.require(Predef.scala:337)
>     at 
> kafka.server.KafkaConfig.validateWithMetadataVersion(KafkaConfig.scala:2545)
>     at 
> kafka.server.MetadataVersionConfigValidator.onMetadataVersionChanged(MetadataVersionConfigValidator.java:62)
>     at 
> kafka.server.MetadataVersionConfigValidator.onMetadataUpdate(MetadataVersionConfigValidator.java:55)
>     at 
> org.apache.kafka.image.loader.MetadataLoader.initializeNewPublishers(MetadataLoader.java:309)
>     at 
> org.apache.kafka.image.loader.MetadataLoader.lambda$scheduleInitializeNewPublishers$0(MetadataLoader.java:266)
>     at 
> org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:127)
>     at 
> org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:210)
>     at 
> org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:181)
>     at java.base/java.lang.Thread.run(Thread.java:1623)
> [2024-06-15 16:00:45,622] ERROR Encountered fatal fault: Unhandled error 
> initializing MetadataVersionPublisher(id=1) with a snapshot at offset 4 
> (org.apache.kafka.server.fault.ProcessTerminatingFaultHandler)
> org.apache.kafka.server.fault.FaultHandlerException: Broker configuration 
> does not support the cluster MetadataVersion
>     at scala.Predef$.require(Predef.scala:337)
>     at 
> kafka.server.KafkaConfig.validateWithMetadataVersion(KafkaConfig.scala:2545)
>     at 
> kafka.server.MetadataVersionConfigValidator.onMetadataVersionChanged(MetadataVersionConfigValidator.java:62)
>     at 
> kafka.server.MetadataVersionConfigValidator.onMetadataUpdate(MetadataVersionConfigValidator.java:55)
>     at 
> org.apache.kafka.image.loader.MetadataLoader.initializeNewPublishers(MetadataLoader.java:309)
>     at 
> org.apache.kafka.image.loader.MetadataLoader.lambda$scheduleInitializeNewPublishers$0(MetadataLoader.java:266)
>     at 
> org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:127)
>     at 
> org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:210)
>     at 
> org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:181)
>     at java.base/java.lang.Thread.run(Thread.java:1623)
> Caused by: java.lang.IllegalArgumentException: requirement failed: Multiple 
> log directories (aka JBOD) are not supported in the current MetadataVersion 
> 3.6-IV2. Need 3.7-IV2 or higher
>     ... 10 more{code}
>  
> This will block combined node setting multiple log dirs upgrade to v3.7.1 or 
> later.





Re: [PR] KAFKA-16969: Log error if config conficts with MV [kafka]

2024-06-17 Thread via GitHub


showuon commented on PR #16366:
URL: https://github.com/apache/kafka/pull/16366#issuecomment-2174883941

   Backported to 3.7/3.8 branches. 





Re: [PR] KAFKA-16969: Log error if config conficts with MV [kafka]

2024-06-17 Thread via GitHub


showuon merged PR #16366:
URL: https://github.com/apache/kafka/pull/16366





Re: [PR] KAFKA-16969: Log error if config conficts with MV [kafka]

2024-06-17 Thread via GitHub


showuon commented on PR #16366:
URL: https://github.com/apache/kafka/pull/16366#issuecomment-2174879458

   Failed tests are unrelated.





Re: [PR] improve log description of QuorumController [kafka]

2024-06-17 Thread via GitHub


chickenchickenlove commented on PR #15926:
URL: https://github.com/apache/kafka/pull/15926#issuecomment-2174863122

   Hey, mumrah.
   Thanks for your time 🙇‍♂️ 





[jira] [Created] (KAFKA-16988) InsufficientResourcesError in ConnectDistributedTest system test

2024-06-17 Thread Luke Chen (Jira)
Luke Chen created KAFKA-16988:
-

 Summary: InsufficientResourcesError in ConnectDistributedTest 
system test
 Key: KAFKA-16988
 URL: https://issues.apache.org/jira/browse/KAFKA-16988
 Project: Kafka
  Issue Type: Bug
Reporter: Luke Chen
Assignee: Luke Chen


Saw InsufficientResourcesError when running 
`ConnectDistributedTest#test_exactly_once_source` system test.

 
{code:java}
InsufficientResourcesError('linux nodes requested: 1. linux 
nodes available: 0') Traceback (most recent call last): File 
"/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 
186, in _do_run data = self.run_test() File 
"/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 
246, in run_test return self.test_context.function(self.test) File 
"/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 433, in 
wrapper return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs) File 
"/opt/kafka-dev/tests/kafkatest/tests/connect/connect_distributed_test.py", 
line 928, in test_exactly_once_source consumer_validator = 
ConsoleConsumer(self.test_context, 1, self.kafka, self.source.topic, 
consumer_timeout_ms=1000, print_key=True) File 
"/opt/kafka-dev/tests/kafkatest/services/console_consumer.py", line 97, in 
__init__ BackgroundThreadService.__init__(self, context, num_nodes) File 
"/usr/local/lib/python3.9/dist-packages/ducktape/services/background_thread.py",
 line 26, in __init__ super(BackgroundThreadService, self).__init__(context, 
num_nodes, cluster_spec, *args, **kwargs) File 
"/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 
107, in __init__ self.allocate_nodes() File 
"/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 
217, in allocate_nodes self.nodes = self.cluster.alloc(self.cluster_spec) File 
"/usr/local/lib/python3.9/dist-packages/ducktape/cluster/cluster.py", line 54, 
in alloc allocated = self.do_alloc(cluster_spec) File 
"/usr/local/lib/python3.9/dist-packages/ducktape/cluster/finite_subcluster.py", 
line 37, in do_alloc good_nodes, bad_nodes = 
self._available_nodes.remove_spec(cluster_spec) File 
"/usr/local/lib/python3.9/dist-packages/ducktape/cluster/node_container.py", 
line 131, in remove_spec raise InsufficientResourcesError(err) 
ducktape.cluster.node_container.InsufficientResourcesError: linux nodes 
requested: 1. linux nodes available: 0 

InsufficientResourcesError('linux nodes requested: 1. linux 
nodes available: 0') Traceback (most recent call last): File 
"/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 
186, in _do_run data = self.run_test() File 
"/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 
246, in run_test return self.test_context.function(self.test) File 
"/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 433, in 
wrapper return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs) File 
"/opt/kafka-dev/tests/kafkatest/tests/connect/connect_distributed_test.py", 
line 928, in test_exactly_once_source consumer_validator = 
ConsoleConsumer(self.test_context, 1, self.kafka, self.source.topic, 
consumer_timeout_ms=1000, print_key=True) File 
"/opt/kafka-dev/tests/kafkatest/services/console_consumer.py", line 97, in 
__init__ BackgroundThreadService.__init__(self, context, num_nodes) File 
"/usr/local/lib/python3.9/dist-packages/ducktape/services/background_thread.py",
 line 26, in __init__ super(BackgroundThreadService, self).__init__(context, 
num_nodes, cluster_spec, *args, **kwargs) File 
"/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 
107, in __init__ self.allocate_nodes() File 
"/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 
217, in allocate_nodes self.nodes = self.cluster.alloc(self.cluster_spec) File 
"/usr/local/lib/python3.9/dist-packages/ducktape/cluster/cluster.py", line 54, 
in alloc allocated = self.do_alloc(cluster_spec) File 
"/usr/local/lib/python3.9/dist-packages/ducktape/cluster/finite_subcluster.py", 
line 37, in do_alloc good_nodes, bad_nodes = 
self._available_nodes.remove_spec(cluster_spec) File 
"/usr/local/lib/python3.9/dist-packages/ducktape/cluster/node_container.py", 
line 131, in remove_spec raise InsufficientResourcesError(err) 
ducktape.cluster.node_container.InsufficientResourcesError: linux nodes 
requested: 1. linux nodes available: 0 
...{code}
 





Re: [PR] KAFKA-16969: revert KAFKA-16606 [kafka]

2024-06-17 Thread via GitHub


showuon closed pull request #16352: KAFKA-16969: revert KAFKA-16606
URL: https://github.com/apache/kafka/pull/16352





Re: [PR] KAFKA-16969: revert KAFKA-16606 [kafka]

2024-06-17 Thread via GitHub


showuon commented on PR #16352:
URL: https://github.com/apache/kafka/pull/16352#issuecomment-2174853262

   Closed in favor of https://github.com/apache/kafka/pull/16366. 





[jira] [Assigned] (KAFKA-16956) Broker-side ability to subscribe to record delete events

2024-06-17 Thread dujian0068 (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dujian0068 reassigned KAFKA-16956:
--

Assignee: dujian0068

> Broker-side ability to subscribe to record delete events
> 
>
> Key: KAFKA-16956
> URL: https://issues.apache.org/jira/browse/KAFKA-16956
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Luke Chen
>Assignee: dujian0068
>Priority: Major
>  Labels: need-kip
>
> In some cases it would be useful for systems outside Kafka to have the 
> ability to know when Kafka deletes records (tombstoning or retention).
> In general the use-case is where there is a desire to link the lifecycle of a 
> record in a third party system (database or filesystem etc) to the lifecycle 
> of the record in Kafka.
> A concrete use-case:  a system using Kafka to distribute video clips + 
> metadata.  The binary content is too big to store in Kafka so the publishing 
> application caches the content in cloud storage and publishes a record 
> containing a S3 url to the video clip.  The desire is to have a mechanism to 
> remove the clip from cloud storage at the same time the record is expunged 
> from Kafka by retention or tombstoning.  Currently there is no practical way 
> to achieve this.
> h2. Desired solution
> A pluggable broker-side mechanism that is informed as records are being 
> compacted away or deleted.  The API would expose the topic from which the 
> record is being deleted, the record key, record headers, timestamp and 
> (possibly) record value.
>  
>  
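
A minimal sketch of what such a pluggable listener could look like (purely hypothetical -- no such broker-side API exists today; it simply restates the fields listed in the description, and a KIP would define the real contract):
{code:java}
// Hypothetical interface, not an existing Kafka API.
import org.apache.kafka.common.header.Headers;

public interface RecordDeletionListener {
    /**
     * Called by the broker as a record is removed by retention, compaction, or a tombstone.
     * The value may be null if the broker chooses not to expose it.
     */
    void onRecordDeleted(String topic,
                         int partition,
                         byte[] key,
                         Headers headers,
                         long timestamp,
                         byte[] value);
}
{code}
In the video-clip use case above, an implementation would map the record key (or a header) to the S3 URL and delete the object when this callback fires.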





Re: [PR] KAFKA-16954: fix consumer close to release assignment in background [kafka]

2024-06-17 Thread via GitHub


lianetm commented on PR #16376:
URL: https://github.com/apache/kafka/pull/16376#issuecomment-2174790856

   Build completed with 4 unrelated failures:
   
   > Build / JDK 8 and Scala 2.12 / testSyncTopicConfigs() – 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationExactlyOnceTest
   > Build / JDK 8 and Scala 2.12 / testTaskRequestWithOldStartMsGetsUpdated() 
– org.apache.kafka.trogdor.coordinator.CoordinatorTest
   > Build / JDK 21 and Scala 2.13 / 
"testNoConsumeWithDescribeAclViaSubscribe(String).quorum=kraft" – 
kafka.api.DelegationTokenEndToEndAuthorizationWithOwnerTest
   > Build / JDK 21 and Scala 2.13 / testDescribeQuorumReplicationSuccessful 
[1] Type=Raft-Isolated, MetadataVersion=4.0-IV0,Security=PLAINTEXT – 
org.apache.kafka.tools.MetadataQuorumCommandTest





[PR] KAFKA-10787: Apply spotless to `streams:examples` and `streams-scala` [kafka]

2024-06-17 Thread via GitHub


gongxuanzhang opened a new pull request, #16378:
URL: https://github.com/apache/kafka/pull/16378

   This PR is a sub-PR of https://github.com/apache/kafka/pull/16097.
   It is part of a series of changes to progressively apply the spotless plugin (import-order) across all modules. In this step, the plugin is activated in:
   * streams:examples
   * streams:streams-scala
   
   ## Module and history 
   
   > Please see the table below for the historical changes related to applying 
the Spotless plugin
   
   | module | apply | related PR |
   | -- | --- | --- |
   | :clients | ❌ | future |
   | :connect:api | ✅ | https://github.com/apache/kafka/pull/16299 |
   | :connect:basic-auth-extension | ✅ | https://github.com/apache/kafka/pull/16299 |
   | :connect:file | ✅ | https://github.com/apache/kafka/pull/16299 |
   | :connect:json | ✅ | https://github.com/apache/kafka/pull/16299 |
   | :connect:mirror | ✅ | https://github.com/apache/kafka/pull/16299 |
   | :connect:mirror-client | ✅ | https://github.com/apache/kafka/pull/16299 |
   | :connect:runtime | ❌ | future |
   | :connect:test-plugins | ✅ | https://github.com/apache/kafka/pull/16299 |
   | :connect:transforms | ✅ | https://github.com/apache/kafka/pull/16299 |
   | :core | ❌ | future |
   | :examples | ✅ | https://github.com/apache/kafka/pull/16296 |
   | :generator | ✅ | https://github.com/apache/kafka/pull/16296 |
   | :group-coordinator:group-coordinator-api | ✅ | https://github.com/apache/kafka/pull/16298 |
   | :group-coordinator | ✅ | https://github.com/apache/kafka/pull/16298 |
   | :jmh-benchmarks | ✅ | https://github.com/apache/kafka/pull/16296 |
   | :log4j-appender | ✅ | https://github.com/apache/kafka/pull/16296 |
   | :metadata | ✅ | https://github.com/apache/kafka/pull/16297 |
   | :server | ✅ | https://github.com/apache/kafka/pull/16297 |
   | :shell | ✅ | https://github.com/apache/kafka/pull/16296 |
   | :storage | ✅ | https://github.com/apache/kafka/pull/16297 |
   | :storage:storage-api | ✅ | https://github.com/apache/kafka/pull/16297 |
   | :streams | ❌ | future |
   | :streams:examples | ✅ | future |
   | :streams:streams-scala | ✅ | future |
   | :streams:test-utils | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-0100 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-0101 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-0102 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-0110 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-10 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-11 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-20 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-21 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-22 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-23 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-24 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-25 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-26 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-27 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-28 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-30 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-31 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-32 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-33 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-34 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-35 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-36 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-37 | ✅ | https://github.com/apache/kafka/pull/16357 |
   | :trogdor | ✅ | https://github.com/apache/kafka/pull/16296 |
   | :raft | ✅ | https://github.com/apache/kafka/pull/16278 |
   | :server-common | ✅ | https://github.com/apache/kafka/pull/16172 |
   | :transaction-coordinator | ✅ | https://github.com/apache/kafka/pull/16172 |
   | :tools | ✅ | https://github.com/apache/kafka/pull/16262 |
   | :tools:tools-api | ✅ | https://github.com/apache/kafka/pull/16262 |
   
   ## How to test:
   We can run `.

Re: [PR] KAFKA-16921: Migrate test of connect module to Junit5 (Runtime direct) [kafka]

2024-06-17 Thread via GitHub


gongxuanzhang commented on PR #16351:
URL: https://github.com/apache/kafka/pull/16351#issuecomment-2174753090

   I updated it. Please take a look, @chia7712.





[jira] [Commented] (KAFKA-16986) After upgrading to Kafka 3.4.1, the producer constantly produces logs related to topicId changes

2024-06-17 Thread Vinicius Vieira dos Santos (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855773#comment-17855773
 ] 

Vinicius Vieira dos Santos commented on KAFKA-16986:


[~jolshan] The only problem I currently see is the log itself. I took a look, and many of our applications log this message, which pollutes the logs from time to time. I don't know exactly which process triggers this log, but it is displayed several times during the pod life cycle, not only at startup; the screenshot I added to the issue shows this. For the same topic and the same partition there are several log lines at different times in the same pod, without restarts or anything like that. It is also important to emphasize that throughout the life cycle of these applications we only have one producer instance, which remains the same for the life of the pod. I even validated the code of our applications to confirm there wasn't a situation where the producer kept being destroyed and recreated.

> After upgrading to Kafka 3.4.1, the producer constantly produces logs related 
> to topicId changes
> 
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 3.0.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Minor
> Attachments: image.png
>
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
> behavior is not expected because the topic was not deleted and recreated so 
> it should simply use the cached data and not go through this client log line.
> We have some applications with around 15 topics and 40 partitions which means 
> around 600 log lines when metadata updates occur
> The main thing for me is to know if this could indicate a problem or if I can 
> simply change the log level of the org.apache.kafka.clients.Metadata class to 
> warn without worries
>  
> There are other reports of the same behavior like this:  
> https://stackoverflow.com/questions/74652231/apache-kafka-resetting-the-last-seen-epoch-of-partition-why
>  
> *Some log occurrences over an interval of about 7 hours, each block refers to 
> an instance of the application in kubernetes*
>  
> !image.png!
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> If you need any more details, please let me know.





Re: [PR] KAFKA-16921: Migrate test of Stream module to Junit5 (Stream state) [kafka]

2024-06-17 Thread via GitHub


m1a2st commented on code in PR #16356:
URL: https://github.com/apache/kafka/pull/16356#discussion_r1643586369


##
streams/src/test/java/org/apache/kafka/streams/state/internals/RocksDBVersionedStoreSegmentValueFormatterTest.java:
##
@@ -18,61 +18,49 @@
 
 import static org.hamcrest.CoreMatchers.equalTo;
 import static org.hamcrest.MatcherAssert.assertThat;
-import static org.junit.Assert.assertThrows;
+import static org.junit.jupiter.api.Assertions.assertThrows;
 
 import java.util.ArrayList;
 import java.util.Arrays;
-import java.util.Collection;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
+import java.util.stream.Stream;
+
 import 
org.apache.kafka.streams.state.internals.RocksDBVersionedStoreSegmentValueFormatter.SegmentValue;
 import 
org.apache.kafka.streams.state.internals.RocksDBVersionedStoreSegmentValueFormatter.SegmentValue.SegmentSearchResult;
-import org.junit.Test;
-import org.junit.experimental.runners.Enclosed;
-import org.junit.runner.RunWith;
-import org.junit.runners.Parameterized;
+import org.junit.jupiter.params.ParameterizedTest;
+import org.junit.jupiter.params.provider.Arguments;
+import org.junit.jupiter.params.provider.MethodSource;
 
-@RunWith(Enclosed.class)
 public class RocksDBVersionedStoreSegmentValueFormatterTest {
 
 /**
  * Non-exceptional scenarios which are expected to occur during regular 
store operation.
  */
-@RunWith(Parameterized.class)
 public static class ExpectedCasesTest {

Review Comment:
   @chia7712, thanks for your comment. I don't quite understand which test case I should extract. Could you share more details? Thanks for your patience.






Re: [PR] KAFKA-10551: Add topic id support to produce request and response [kafka]

2024-06-17 Thread via GitHub


jolshan commented on code in PR #15968:
URL: https://github.com/apache/kafka/pull/15968#discussion_r1643573869


##
core/src/test/scala/unit/kafka/coordinator/group/GroupCoordinatorTest.scala:
##
@@ -92,6 +92,7 @@ class GroupCoordinatorTest {
   private val protocolSuperset = List((protocolName, metadata), ("roundrobin", 
metadata))
   private val requireStable = true
   private var groupPartitionId: Int = -1
+  val groupMetadataTopicId = Uuid.fromString("JaTH2JYK2ed2GzUapg8tgg")

Review Comment:
   nit: any reason we choose this over a random uuid?






Re: [PR] KAFKA-10551: Add topic id support to produce request and response [kafka]

2024-06-17 Thread via GitHub


jolshan commented on code in PR #15968:
URL: https://github.com/apache/kafka/pull/15968#discussion_r1643571081


##
core/src/main/scala/kafka/server/KafkaApis.scala:
##
@@ -608,40 +608,55 @@ class KafkaApis(val requestChannel: RequestChannel,
   }
 }
 
-val unauthorizedTopicResponses = mutable.Map[TopicPartition, 
PartitionResponse]()
-val nonExistingTopicResponses = mutable.Map[TopicPartition, 
PartitionResponse]()
-val invalidRequestResponses = mutable.Map[TopicPartition, 
PartitionResponse]()
-val authorizedRequestInfo = mutable.Map[TopicPartition, MemoryRecords]()
+val unauthorizedTopicResponses = mutable.Map[TopicIdPartition, 
PartitionResponse]()
+val nonExistingTopicResponses = mutable.Map[TopicIdPartition, 
PartitionResponse]()
+val invalidRequestResponses = mutable.Map[TopicIdPartition, 
PartitionResponse]()
+val authorizedRequestInfo = mutable.Map[TopicIdPartition, MemoryRecords]()
+val topicIdToPartitionData = new mutable.ArrayBuffer[(TopicIdPartition, 
ProduceRequestData.PartitionProduceData)]
+
+produceRequest.data.topicData.forEach { topic =>
+  topic.partitionData.forEach { partition =>
+val (topicName, topicId) = if (produceRequest.version() >= 12) {
+  (metadataCache.getTopicName(topic.topicId).getOrElse(topic.name), 
topic.topicId())
+} else {
+  (topic.name(), metadataCache.getTopicId(topic.name()))
+}
+
+val topicPartition = new TopicPartition(topicName, partition.index())
+if (topicName == null || topicName.isEmpty)

Review Comment:
   ditto here -- do we expect both the empty string and null?






Re: [PR] KAFKA-16772: Introduce kraft.version to support KIP-853 [kafka]

2024-06-17 Thread via GitHub


cmccabe commented on PR #16230:
URL: https://github.com/apache/kafka/pull/16230#issuecomment-2174664455

   I was originally going to have two records: one `KRaftVersionRecord` for the 
raft layer, and one `FeatureRecord` for the metadata layer, encoding the same 
data. The advantage of doing it this way is that it will work well with the 
current code structure. For example, we have code that validates that 
FeaturesImage can dump its state and restore that state. If FeaturesImage is 
taking input from outside of the list of metadata records, that invariant is 
broken. We also currently don't propagate control records to the metadata 
layer, so that is something we'd have to consider changing.
   
   The big disadvantage of having two records rather than one is the state can 
get out of sync. Which I do think is a real risk, especially when we are 
changing things.
   
   I realize writing the record at the Raft layer seems useful to you, but in 
the metadata layer, the thought that `read(image.write) != image` does not 
spark joy. If this is going to be handled by the Raft layer and not the 
metadata layer, then maybe it should be excluded from the image altogether. 
We'll have to talk about that.
   
   In the meantime, I merged in trunk. There were import statement conflicts.





Re: [PR] KAFKA-10551: Add topic id support to produce request and response [kafka]

2024-06-17 Thread via GitHub


jolshan commented on code in PR #15968:
URL: https://github.com/apache/kafka/pull/15968#discussion_r1643566934


##
clients/src/test/java/org/apache/kafka/common/requests/ProduceRequestTest.java:
##
@@ -120,15 +121,38 @@ public void testBuildWithCurrentMessageFormat() {
 ProduceRequest.Builder requestBuilder = 
ProduceRequest.forMagic(RecordBatch.CURRENT_MAGIC_VALUE,
 new ProduceRequestData()
 .setTopicData(new 
ProduceRequestData.TopicProduceDataCollection(Collections.singletonList(
-new 
ProduceRequestData.TopicProduceData().setName("test").setPartitionData(Collections.singletonList(
-new 
ProduceRequestData.PartitionProduceData().setIndex(9).setRecords(builder.build()
+new 
ProduceRequestData.TopicProduceData()
+.setName("test")

Review Comment:
   will we have a case we write both?






Re: [PR] KAFKA-10551: Add topic id support to produce request and response [kafka]

2024-06-17 Thread via GitHub


jolshan commented on code in PR #15968:
URL: https://github.com/apache/kafka/pull/15968#discussion_r1643566042


##
clients/src/test/java/org/apache/kafka/common/requests/ProduceRequestTest.java:
##
@@ -120,15 +121,38 @@ public void testBuildWithCurrentMessageFormat() {
 ProduceRequest.Builder requestBuilder = 
ProduceRequest.forMagic(RecordBatch.CURRENT_MAGIC_VALUE,
 new ProduceRequestData()
 .setTopicData(new 
ProduceRequestData.TopicProduceDataCollection(Collections.singletonList(
-new 
ProduceRequestData.TopicProduceData().setName("test").setPartitionData(Collections.singletonList(
-new 
ProduceRequestData.PartitionProduceData().setIndex(9).setRecords(builder.build()

Review Comment:
   nit: should we adjust the spacing






Re: [PR] KAFKA-10551: Add topic id support to produce request and response [kafka]

2024-06-17 Thread via GitHub


jolshan commented on code in PR #15968:
URL: https://github.com/apache/kafka/pull/15968#discussion_r1643566042


##
clients/src/test/java/org/apache/kafka/common/requests/ProduceRequestTest.java:
##
@@ -120,15 +121,38 @@ public void testBuildWithCurrentMessageFormat() {
 ProduceRequest.Builder requestBuilder = 
ProduceRequest.forMagic(RecordBatch.CURRENT_MAGIC_VALUE,
 new ProduceRequestData()
 .setTopicData(new 
ProduceRequestData.TopicProduceDataCollection(Collections.singletonList(
-new 
ProduceRequestData.TopicProduceData().setName("test").setPartitionData(Collections.singletonList(
-new 
ProduceRequestData.PartitionProduceData().setIndex(9).setRecords(builder.build()

Review Comment:
   nit: should we adjust the spacing now that we have one less level?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-10551: Add topic id support to produce request and response [kafka]

2024-06-17 Thread via GitHub


jolshan commented on code in PR #15968:
URL: https://github.com/apache/kafka/pull/15968#discussion_r1643562963


##
clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java:
##
@@ -126,15 +134,15 @@ public ProduceRequest(ProduceRequestData 
produceRequestData, short version) {
 }
 
 // visible for testing
-Map<TopicPartition, Integer> partitionSizes() {
+Map<TopicIdPartition, Integer> partitionSizes() {
 if (partitionSizes == null) {
 // this method may be called by different thread (see the comment 
on data)
 synchronized (this) {
 if (partitionSizes == null) {
-Map<TopicPartition, Integer> tmpPartitionSizes = new HashMap<>();
+Map<TopicIdPartition, Integer> tmpPartitionSizes = new HashMap<>();
 data.topicData().forEach(topicData ->
 topicData.partitionData().forEach(partitionData ->
-tmpPartitionSizes.compute(new 
TopicPartition(topicData.name(), partitionData.index()),
+tmpPartitionSizes.compute(new 
TopicIdPartition(topicData.topicId(), partitionData.index(), topicData.name()),

Review Comment:
   will we ever need the name and ID for this data structure? I know fetch had 
something where we pass in a map to convert IDs to names if needed.
   
   Just want to make sure folks won't use this info expecting the name to be 
there. If we don't think it is needed, maybe just include a comment about it. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-16968: Make 3.8-IV0 a stable MetadataVersion and create 3.9-IV0 [kafka]

2024-06-17 Thread via GitHub


junrao commented on PR #16347:
URL: https://github.com/apache/kafka/pull/16347#issuecomment-2174646474

   @cmccabe : https://github.com/apache/kafka/pull/15673 is fixing a mistake 
that shouldn't be in 3.8.0. We should have bumped up the API version for 
ListOffset, but we didn't. To me, that seems a blocker for 3.8.0, right?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-10551: Add topic id support to produce request and response [kafka]

2024-06-17 Thread via GitHub


jolshan commented on code in PR #15968:
URL: https://github.com/apache/kafka/pull/15968#discussion_r1643560948


##
clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java:
##
@@ -51,13 +52,20 @@ public static Builder forMagic(byte magic, 
ProduceRequestData data) {
 if (magic < RecordBatch.MAGIC_VALUE_V2) {
 minVersion = 2;
 maxVersion = 2;
+} else if (canNotSupportTopicId(data)) {
+minVersion = 3;
+maxVersion = 11;
 } else {
 minVersion = 3;
 maxVersion = ApiKeys.PRODUCE.latestVersion();
 }
 return new Builder(minVersion, maxVersion, data);
 }
 
+private static boolean canNotSupportTopicId(ProduceRequestData data) {
+return data.topicData().stream().anyMatch(d -> d.topicId() == null || 
d.topicId() == Uuid.ZERO_UUID);

Review Comment:
   I saw this null check in a few places. Is there a chance that this value is 
null or are we just trying to cover our bases?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-10551: Add topic id support to produce request and response [kafka]

2024-06-17 Thread via GitHub


jolshan commented on code in PR #15968:
URL: https://github.com/apache/kafka/pull/15968#discussion_r1643560273


##
clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java:
##
@@ -891,11 +900,20 @@ private void sendProduceRequest(long now, int 
destination, short acks, int timeo
 // which is supporting the new magic version to one which doesn't, 
then we will need to convert.
 if (!records.hasMatchingMagic(minUsedMagic))
 records = batch.records().downConvert(minUsedMagic, 0, 
time).records();
-ProduceRequestData.TopicProduceData tpData = tpd.find(tp.topic());
+ProduceRequestData.TopicProduceData tpData = canUseTopicId ?
+tpd.find(tp.topic(), topicIds.get(tp.topic())) :
+tpd.find(new 
ProduceRequestData.TopicProduceData().setName(tp.topic()));
+
 if (tpData == null) {
-tpData = new 
ProduceRequestData.TopicProduceData().setName(tp.topic());
+tpData = new ProduceRequestData.TopicProduceData();
 tpd.add(tpData);
 }
+if (canUseTopicId) {

Review Comment:
   nit: would we also want this in the above block so we don't set it each time?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Don't swallow validateReconfiguration exceptions [kafka]

2024-06-17 Thread via GitHub


cmccabe commented on code in PR #16346:
URL: https://github.com/apache/kafka/pull/16346#discussion_r1643559385


##
core/src/main/scala/kafka/server/DynamicBrokerConfig.scala:
##
@@ -640,8 +640,8 @@ class DynamicBrokerConfig(private val kafkaConfig: 
KafkaConfig) extends Logging
   reconfigurable.validateReconfiguration(newConfigs)
 } catch {
   case e: ConfigException => throw e
-  case _: Exception =>
-throw new ConfigException(s"Validation of dynamic config update of 
$updatedConfigNames failed with class ${reconfigurable.getClass}")
+  case e: Exception =>
+throw new ConfigException(s"Validation of dynamic config update of 
$updatedConfigNames failed with class ${reconfigurable.getClass} due to: $e")

Review Comment:
   Putting a password string into an exception is certainly a bug. If we're 
doing that, let's file a JIRA for it and fix ASAP.
   
   @rajinisivaram : Do you have an example of code doing this?
   
   There are a limited number of reconfigurables that deal with SASL / SSL so 
we should be able to look at them all and fix them if needed.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: add a note in v3.8.0 notable change for JBOD support for tiered storage [kafka]

2024-06-17 Thread via GitHub


satishd commented on PR #16369:
URL: https://github.com/apache/kafka/pull/16369#issuecomment-2174632152

   @jlprat @showuon Merged to trunk and 3.8 branches.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: add a note in v3.8.0 notable change for JBOD support for tiered storage [kafka]

2024-06-17 Thread via GitHub


satishd merged PR #16369:
URL: https://github.com/apache/kafka/pull/16369


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-16986) After upgrading to Kafka 3.4.1, the producer constantly produces logs related to topicId changes

2024-06-17 Thread Justine Olshan (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855758#comment-17855758
 ] 

Justine Olshan commented on KAFKA-16986:


Hmmm. So the earlier code block should be catching the common case of client 
startup where we saw this log spam. (current epoch is null).

 

I guess in the upgrade case, the client is alive and you won't have current 
epoch as null. This should only happen on the client one time during the 
upgrade. Once the upgrade is complete you shouldn't see this error again. I 
don't know if it makes sense to change this case since it is a legitimate 
resetting of the epoch on this upgrade but I do see the argument for the log 
spam being annoying. 

Is it sufficient that this should not be expected after the upgrade? (And let 
me know if it is seen after the upgrade is fully completed.)

> After upgrading to Kafka 3.4.1, the producer constantly produces logs related 
> to topicId changes
> 
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 3.0.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Minor
> Attachments: image.png
>
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
> behavior is not expected because the topic was not deleted and recreated so 
> it should simply use the cached data and not go through this client log line.
> We have some applications with around 15 topics and 40 partitions which means 
> around 600 log lines when metadata updates occur
> The main thing for me is to know if this could indicate a problem or if I can 
> simply change the log level of the org.apache.kafka.clients.Metadata class to 
> warn without worries
>  
> There are other reports of the same behavior like this:  
> https://stackoverflow.com/questions/74652231/apache-kafka-resetting-the-last-seen-epoch-of-partition-why
>  
> *Some log occurrences over an interval of about 7 hours, each block refers to 
> an instance of the application in kubernetes*
>  
> !image.png!
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> If you need any more details, please let me know.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-16986) After upgrading to Kafka 3.4.1, the producer constantly produces logs related to topicId changes

2024-06-17 Thread Justine Olshan (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855758#comment-17855758
 ] 

Justine Olshan edited comment on KAFKA-16986 at 6/17/24 11:37 PM:
--

Hmmm. So the earlier code block should be catching the common case of client 
startup where we saw this log spam. (current epoch is null).

 

I guess in the upgrade case, the client is alive and you won't have current 
epoch as null. This should only happen on the client during the upgrade. Once 
the upgrade is complete you shouldn't see this error again. I don't know if it 
makes sense to change this case since it is a legitimate resetting of the epoch 
on this upgrade but I do see the argument for the log spam being annoying. 

Is it sufficient that this should not be expected after the upgrade? (And let 
me know if it is seen after the upgrade is fully completed.)


was (Author: jolshan):
Hmmm. So the earlier code block should be catching the common case of client 
startup where we saw this log spam. (current epoch is null).

 

I guess in the upgrade case, the client is alive and you won't have current 
epoch as null. This should only happen on the client one time during the 
upgrade. Once the upgrade is complete you shouldn't see this error again. I 
don't know if it makes sense to change this case since it is a legitimate 
resetting of the epoch on this upgrade but I do see the argument for the log 
spam being annoying. 

Is it sufficient that this should not be expected after the upgrade? (And let 
me know if it is seen after the upgrade is fully completed.)

> After upgrading to Kafka 3.4.1, the producer constantly produces logs related 
> to topicId changes
> 
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 3.0.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Minor
> Attachments: image.png
>
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
> behavior is not expected because the topic was not deleted and recreated so 
> it should simply use the cached data and not go through this client log line.
> We have some applications with around 15 topics and 40 partitions which means 
> around 600 log lines when metadata updates occur
> The main thing for me is to know if this could indicate a problem or if I can 
> simply change the log level of the org.apache.kafka.clients.Metadata class to 
> warn without worries
>  
> There are other reports of the same behavior like this:  
> https://stackoverflow.com/questions/74652231/apache-kafka-resetting-the-last-seen-epoch-of-partition-why
>  
> *Some log occurrences over an interval of about 7 hours, each block refers to 
> an instance of the application in kubernetes*
>  
> !image.png!
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> If you need any more details, please let me know.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-16977: Reapply dynamic remote configs after broker restart [kafka]

2024-06-17 Thread via GitHub


satishd commented on PR #16353:
URL: https://github.com/apache/kafka/pull/16353#issuecomment-2174601994

   Retriggered the CI job as one of the test runs timed out.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Don't swallow validateReconfiguration exceptions [kafka]

2024-06-17 Thread via GitHub


ahuang98 commented on code in PR #16346:
URL: https://github.com/apache/kafka/pull/16346#discussion_r1643521884


##
core/src/main/scala/kafka/server/DynamicBrokerConfig.scala:
##
@@ -640,8 +640,8 @@ class DynamicBrokerConfig(private val kafkaConfig: 
KafkaConfig) extends Logging
   reconfigurable.validateReconfiguration(newConfigs)
 } catch {
   case e: ConfigException => throw e
-  case _: Exception =>
-throw new ConfigException(s"Validation of dynamic config update of 
$updatedConfigNames failed with class ${reconfigurable.getClass}")
+  case e: Exception =>
+throw new ConfigException(s"Validation of dynamic config update of 
$updatedConfigNames failed with class ${reconfigurable.getClass} due to: $e")

Review Comment:
   This function already catches and re-throws any ConfigException, so it seems 
a bit unlikely that re-throwing the other exceptions that were missed will 
return config-related sensitive data. 
   However, to be safe:
   1. we could limit the blast radius by additionally catching 
IllegalStateException. I've filtered through all the impls of 
`validateReconfiguration` quickly and this looks to be safe to do.
   2. we could change impls of `validateReconfiguration(Map<String, ?> configs)` to 
throw ConfigException where they might throw other exception types, e.g. 
SaslChannelBuilder currently throws `IllegalStateException` when the 
SslFactory has not been configured yet - we could wrap this in a 
ConfigException, as in the sketch below.
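   
   A rough sketch of that wrapping for a SaslChannelBuilder-style case (the 
`Factory` stand-in and the class name are assumptions, not the real builder 
code):
   
```java
import java.util.Map;

import org.apache.kafka.common.config.ConfigException;

// Hypothetical sketch of option 2: wrap non-config failures inside
// validateReconfiguration so DynamicBrokerConfig only ever sees ConfigException.
public class WrappingReconfigurableSketch {

    interface Factory {                       // stand-in for something like SslFactory
        void validateReconfiguration(Map<String, ?> configs);
    }

    private final Factory factory;

    public WrappingReconfigurableSketch(Factory factory) {
        this.factory = factory;
    }

    public void validateReconfiguration(Map<String, ?> configs) {
        try {
            factory.validateReconfiguration(configs);   // may throw IllegalStateException
        } catch (IllegalStateException e) {
            // Keep the underlying message, but don't echo raw config values back.
            throw new ConfigException("Reconfiguration validation failed: " + e.getMessage());
        }
    }
}
```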
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] KAFKA-16749: Implemented share fetch messages (KIP-932) [kafka]

2024-06-17 Thread via GitHub


apoorvmittal10 opened a new pull request, #16377:
URL: https://github.com/apache/kafka/pull/16377

   Share group consumers use the ShareFetch API to retrieve messages they've 
claimed (acquired records) from the leader brokers of share partitions.
   
   The replica manager provides an API to retrieve messages directly from the 
underlying topic partition. The fetch implementation uses the replica manager 
to fetch messages from the specific offset known by the share partition leader.
   
   The requests are sent to a queue and processed asynchronously.
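   
   A rough sketch of that queue-and-async-processing shape, using placeholder 
names rather than the classes added in this PR:
   
```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative only: enqueue a fetch request, let a background thread process
// it, and complete a future with the result. Not the actual implementation.
public class ShareFetchQueueSketch {

    static final class PendingFetch {
        final String groupId;
        final String topicIdPartition;
        final CompletableFuture<String> result = new CompletableFuture<>();

        PendingFetch(String groupId, String topicIdPartition) {
            this.groupId = groupId;
            this.topicIdPartition = topicIdPartition;
        }
    }

    private final BlockingQueue<PendingFetch> queue = new LinkedBlockingQueue<>();

    // Request-handling path: enqueue and return immediately.
    public CompletableFuture<String> submit(String groupId, String topicIdPartition) {
        PendingFetch pending = new PendingFetch(groupId, topicIdPartition);
        queue.add(pending);
        return pending.result;
    }

    // Background path: drain one request and complete it, standing in for the
    // replica-manager read done by the share partition leader in the real code.
    public void processOne() throws InterruptedException {
        PendingFetch pending = queue.take();
        pending.result.complete("acquired records for " + pending.topicIdPartition);
    }
}
```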
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Don't swallow validateReconfiguration exceptions [kafka]

2024-06-17 Thread via GitHub


ahuang98 commented on code in PR #16346:
URL: https://github.com/apache/kafka/pull/16346#discussion_r1643505085


##
core/src/main/scala/kafka/server/DynamicBrokerConfig.scala:
##
@@ -640,8 +640,8 @@ class DynamicBrokerConfig(private val kafkaConfig: 
KafkaConfig) extends Logging
   reconfigurable.validateReconfiguration(newConfigs)
 } catch {
   case e: ConfigException => throw e
-  case _: Exception =>
-throw new ConfigException(s"Validation of dynamic config update of 
$updatedConfigNames failed with class ${reconfigurable.getClass}")
+  case e: Exception =>
+throw new ConfigException(s"Validation of dynamic config update of 
$updatedConfigNames failed with class ${reconfigurable.getClass} due to: $e")

Review Comment:
   Thanks for pointing this out, I'll make this more specific to the error case 
I'm trying to catch or change the error  case to throw ConfigException instead



##
core/src/main/scala/kafka/server/DynamicBrokerConfig.scala:
##
@@ -640,8 +640,8 @@ class DynamicBrokerConfig(private val kafkaConfig: 
KafkaConfig) extends Logging
   reconfigurable.validateReconfiguration(newConfigs)
 } catch {
   case e: ConfigException => throw e
-  case _: Exception =>
-throw new ConfigException(s"Validation of dynamic config update of 
$updatedConfigNames failed with class ${reconfigurable.getClass}")
+  case e: Exception =>
+throw new ConfigException(s"Validation of dynamic config update of 
$updatedConfigNames failed with class ${reconfigurable.getClass} due to: $e")

Review Comment:
   Thanks for pointing this out, I'll make this more specific to the error case 
I'm trying to catch or change the error case to throw ConfigException instead



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Don't swallow validateReconfiguration exceptions [kafka]

2024-06-17 Thread via GitHub


ahuang98 commented on code in PR #16346:
URL: https://github.com/apache/kafka/pull/16346#discussion_r1643505085


##
core/src/main/scala/kafka/server/DynamicBrokerConfig.scala:
##
@@ -640,8 +640,8 @@ class DynamicBrokerConfig(private val kafkaConfig: 
KafkaConfig) extends Logging
   reconfigurable.validateReconfiguration(newConfigs)
 } catch {
   case e: ConfigException => throw e
-  case _: Exception =>
-throw new ConfigException(s"Validation of dynamic config update of 
$updatedConfigNames failed with class ${reconfigurable.getClass}")
+  case e: Exception =>
+throw new ConfigException(s"Validation of dynamic config update of 
$updatedConfigNames failed with class ${reconfigurable.getClass} due to: $e")

Review Comment:
   Thanks for pointing this out, perhaps I'll make this more specific to the 
error I'm trying to catch



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-16707: Kafka Kraft : using Principal Type in StandardACL in order to defined ACL with a notion of group without rewriting KafkaPrincipal of client by rules [kafka]

2024-06-17 Thread via GitHub


handfreezer commented on PR #16361:
URL: https://github.com/apache/kafka/pull/16361#issuecomment-2174527709

   Hello, when I'm running "./gradlew test" on my side from an apache/kafka trunk 
clone, I have failing tests.
   So is there an easy way to know which failed test in 
"continuous-integration/jenkins/pr-merge" I have to look at?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (KAFKA-12652) connector doesnt accept transformation configuration

2024-06-17 Thread Greg Harris (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Harris resolved KAFKA-12652.
-
Resolution: Duplicate

> connector doesnt accept transformation configuration
> 
>
> Key: KAFKA-12652
> URL: https://issues.apache.org/jira/browse/KAFKA-12652
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Affects Versions: 2.7.0
> Environment: ubuntu 20.04, strimzi
>Reporter: amit cahanovich
>Priority: Critical
>
> connector configuration:
> spec:
>  class: io.confluent.connect.jdbc.JdbcSourceConnector
>  tasksMax: 2
>  config:
>  connection.url: 
> ${file:/opt/kafka/external-configuration/connector-config/kafka_secrets.properties:stg2_CONNECTION_URL}
>  schema.pattern: 
> ${file:/opt/kafka/external-configuration/connector-config/kafka_secrets.properties:stg2_SCHEMA_PATERN}
>  connection.user: 
> ${file:/opt/kafka/external-configuration/connector-config/kafka_secrets.properties:stg2_CONNECTION_USER}
>  connection.password: 
> ${file:/opt/kafka/external-configuration/connector-config/kafka_secrets.properties:stg2_CONNECTION_PASSWORD}
>  mode: bulk
>  validate.non.null: false
>  table.types: TABLE
>  table.whitelist: jms_lib_source_of_lighting
>  topic.creation.default.replication.factor: 1
>  topic.creation.default.partitions: 1
>  transforms: MakeMap, InsertSource
>  transforms.MakeMap.type: org.apache.kafka.connect.transforms.HoistField$Value
>  transforms.MakeMap.field: line
>  transforms.InsertSource.type: 
> org.apache.kafka.connect.transforms.InsertField$Value
>  transforms.InsertSource.static.field: data_source
>  transforms.InsertSource.static.value: test-file-source
>  
> error:
>  value.converter = null
>  
> (org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig)
>  [StartAndStopExecutor-connect-1-2]
> 2021-04-11 08:37:40,735 ERROR Failed to start task jdbc-mssql-connectora-0 
> (org.apache.kafka.connect.runtime.Worker) [StartAndStopExecutor-connect-1-2]
> org.apache.kafka.connect.errors.ConnectException: 
> org.apache.kafka.common.config.ConfigException: Unknown configuration 
> 'transforms.MakeMap.type'
>  at 
> org.apache.kafka.connect.runtime.ConnectorConfig.transformations(ConnectorConfig.java:296)
>  at org.apache.kafka.connect.runtime.Worker.buildWorkerTask(Worker.java:605)
>  at org.apache.kafka.connect.runtime.Worker.startTask(Worker.java:555)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startTask(DistributedHerder.java:1251)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1700(DistributedHerder.java:127)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder$10.call(DistributedHerder.java:1266)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder$10.call(DistributedHerder.java:1262)
>  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: org.apache.kafka.common.config.ConfigException: Unknown 
> configuration 'transforms.MakeMap.type'
>  at org.apache.kafka.common.config.AbstractConfig.get(AbstractConfig.java:159)
>  at 
> org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig.get(SourceConnectorConfig.java:57)
>  at 
> org.apache.kafka.connect.runtime.SourceConnectorConfig.get(SourceConnectorConfig.java:141)
>  at 
> org.apache.kafka.common.config.AbstractConfig.getClass(AbstractConfig.java:216)
>  at 
> org.apache.kafka.connect.runtime.ConnectorConfig.transformations(ConnectorConfig.java:281)
>  ... 10 more



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-10568) transforms.InsertField.type

2024-06-17 Thread Greg Harris (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Harris resolved KAFKA-10568.
-
Resolution: Duplicate

> transforms.InsertField.type
> ---
>
> Key: KAFKA-10568
> URL: https://issues.apache.org/jira/browse/KAFKA-10568
> Project: Kafka
>  Issue Type: Bug
> Environment: Debian, Kafka-2.13-2.6.0
>Reporter: Nikos Nikoloutsakos
>Priority: Trivial
>
> Hello,
> i'am trying to run a simple SMT for kafka-connect
>  
> {code:java}
> transforms=InsertField
> transforms.InsertField.type=org.apache.kafka.connect.transforms.InsertField$Value
> {code}
> But i get the following error
> {code:java}
> [2020-10-02 18:34:27,508] ERROR Failed to start task test-smst-0 
> (org.apache.kafka.connect.runtime.Worker:560)
> org.apache.kafka.connect.errors.ConnectException: 
> org.apache.kafka.common.config.ConfigException: Unknown configuration 
> 'transforms.InsertField.type'
> at 
> org.apache.kafka.connect.runtime.ConnectorConfig.transformations(ConnectorConfig.java:296)
> at 
> org.apache.kafka.connect.runtime.Worker.buildWorkerTask(Worker.java:605)
> at org.apache.kafka.connect.runtime.Worker.startTask(Worker.java:555)
> at 
> org.apache.kafka.connect.runtime.standalone.StandaloneHerder.createConnectorTasks(StandaloneHerder.java:333)
> at 
> org.apache.kafka.connect.runtime.standalone.StandaloneHerder.updateConnectorTasks(StandaloneHerder.java:359)
> at 
> org.apache.kafka.connect.runtime.standalone.StandaloneHerder.lambda$null$3(StandaloneHerder.java:240)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.common.config.ConfigException: Unknown 
> configuration 'transforms.InsertField.type'
> at 
> org.apache.kafka.common.config.AbstractConfig.get(AbstractConfig.java:159)
> at 
> org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig.get(SourceConnectorConfig.java:57)
> at 
> org.apache.kafka.connect.runtime.SourceConnectorConfig.get(SourceConnectorConfig.java:141)
> at 
> org.apache.kafka.common.config.AbstractConfig.getClass(AbstractConfig.java:216)
> at 
> org.apache.kafka.connect.runtime.ConnectorConfig.transformations(ConnectorConfig.java:281)
> ... 12 more
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-16862: Refactor ConsumerTaskTest to be deterministic and avoid tight loops [kafka]

2024-06-17 Thread via GitHub


xiaoqingwanga commented on PR #16303:
URL: https://github.com/apache/kafka/pull/16303#issuecomment-2174489650

   Hey @gharris1727, the code has been committed, please review it again.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-16968: Make 3.8-IV0 a stable MetadataVersion and create 3.9-IV0 [kafka]

2024-06-17 Thread via GitHub


cmccabe commented on code in PR #16347:
URL: https://github.com/apache/kafka/pull/16347#discussion_r1643463801


##
server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java:
##
@@ -202,11 +202,12 @@ public enum MetadataVersion {
 // Add new fetch request version for KIP-951
 IBP_3_7_IV4(19, "3.7", "IV4", false),
 
-// Add ELR related supports (KIP-966).
-IBP_3_8_IV0(20, "3.8", "IV0", true),
+// New version for the Kafka 3.8.0 release.
+IBP_3_8_IV0(20, "3.8", "IV0", false),
 
+// Add ELR related supports (KIP-966).

Review Comment:
   I will add a comment.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-16899 GroupRebalanceConfig: rebalanceTimeoutMs updated to commitTimeoutDuringReconciliation [kafka]

2024-06-17 Thread via GitHub


kirktrue commented on PR #16334:
URL: https://github.com/apache/kafka/pull/16334#issuecomment-2174472337

   My apologies, @linu-shibu.
   
   I think the consensus was to change 
[`MembershipManagerImpl`'s](https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/MembershipManagerImpl.java)
 `rebalanceTimeoutMs` instance variable to `commitTimeoutDuringReconciliation`. 
We don't want to change any other code at this point.
   
   Thanks!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-16862: Refactor ConsumerTaskTest to be deterministic and avoid tight loops [kafka]

2024-06-17 Thread via GitHub


xiaoqingwanga commented on code in PR #16303:
URL: https://github.com/apache/kafka/pull/16303#discussion_r1643456559


##
storage/src/test/java/org/apache/kafka/server/log/remote/metadata/storage/ConsumerTaskTest.java:
##
@@ -309,49 +300,52 @@ public void testCanReprocessSkippedRecords() throws 
InterruptedException {
 // explicitly re-adding the records since MockConsumer drops them on 
poll.
 addRecord(consumer, metadataPartition, tpId1, 0);
 addRecord(consumer, metadataPartition, tpId0, 1);
+consumerTask.ingestRecords();
 // Waiting for all metadata records to be re-read from the first 
metadata partition number
-TestUtils.waitForCondition(() -> 
consumerTask.readOffsetForMetadataPartition(metadataPartition).equals(Optional.of(1L)),
 "Couldn't read record");
+assertEquals(Optional.of(1L), 
consumerTask.readOffsetForMetadataPartition(metadataPartition));
 // Verifying that all the metadata records from the first metadata 
partition were processed properly.
-TestUtils.waitForCondition(() -> handler.metadataCounter == 2, 
"Couldn't read record");
+assertEquals(2, handler.metadataCounter);
 }
 
 @Test
-public void testMaybeMarkUserPartitionsAsReady() throws 
InterruptedException {
+public void testMaybeMarkUserPartitionsAsReady() {
 final TopicIdPartition tpId = getIdPartitions("hello", 1).get(0);
 final int metadataPartition = partitioner.metadataPartition(tpId);
 
consumer.updateEndOffsets(Collections.singletonMap(toRemoteLogPartition(metadataPartition),
 2L));
 consumerTask.addAssignmentsForPartitions(Collections.singleton(tpId));
-thread.start();
+consumerTask.ingestRecords();
 
-TestUtils.waitForCondition(() -> 
consumerTask.isUserPartitionAssigned(tpId), "Waiting for " + tpId + " to be 
assigned");
+assertTrue(consumerTask.isUserPartitionAssigned(tpId), "Partition " + 
tpId + " has not been assigned");
 
assertTrue(consumerTask.isMetadataPartitionAssigned(metadataPartition));
 assertFalse(handler.isPartitionInitialized.containsKey(tpId));
 IntStream.range(0, 5).forEach(offset -> addRecord(consumer, 
metadataPartition, tpId, offset));
-TestUtils.waitForCondition(() -> 
consumerTask.readOffsetForMetadataPartition(metadataPartition).equals(Optional.of(4L)),
 "Couldn't read record");
+consumerTask.ingestRecords();
+assertEquals(Optional.of(4L), 
consumerTask.readOffsetForMetadataPartition(metadataPartition));
 assertTrue(handler.isPartitionInitialized.get(tpId));
 }
 
 @ParameterizedTest
 @CsvSource(value = {"0, 0", "500, 500"})
-public void testMaybeMarkUserPartitionAsReadyWhenTopicIsEmpty(long 
beginOffset,
-  long 
endOffset) throws InterruptedException {
+public void testMaybeMarkUserPartitionAsReadyWhenTopicIsEmpty(long 
beginOffset, long endOffset) {
 final TopicIdPartition tpId = getIdPartitions("world", 1).get(0);
 final int metadataPartition = partitioner.metadataPartition(tpId);
 
consumer.updateBeginningOffsets(Collections.singletonMap(toRemoteLogPartition(metadataPartition),
 beginOffset));
 
consumer.updateEndOffsets(Collections.singletonMap(toRemoteLogPartition(metadataPartition),
 endOffset));
 consumerTask.addAssignmentsForPartitions(Collections.singleton(tpId));
-thread.start();
+consumerTask.ingestRecords();
 
-TestUtils.waitForCondition(() -> 
consumerTask.isUserPartitionAssigned(tpId), "Waiting for " + tpId + " to be 
assigned");
+assertTrue(consumerTask.isUserPartitionAssigned(tpId), "Partition " + 
tpId + " has not been assigned");
 
assertTrue(consumerTask.isMetadataPartitionAssigned(metadataPartition));
-TestUtils.waitForCondition(() -> 
handler.isPartitionInitialized.containsKey(tpId),
-"should have initialized the partition");
+assertTrue(handler.isPartitionInitialized.containsKey(tpId), "Should 
have initialized the partition");
 
assertFalse(consumerTask.readOffsetForMetadataPartition(metadataPartition).isPresent());
 }
 
 @Test
 public void testConcurrentAccess() throws InterruptedException {
-thread.start();
+// Here we need to test concurrent access. When ConsumerTask is 
ingesting records,
+// we need to concurrently add partitions and perform close()
+new Thread(consumerTask).start();

Review Comment:
   I don't necessarily think it needs join(), because this thread will be shut 
down with close(). Even if, for some reason, it doesn't get closed, the 
resources will be freed when the test ends anyway. 
   Just to be extra careful, I'll give it a try.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Re: [PR] MINOR: consumer log fixes [kafka]

2024-06-17 Thread via GitHub


kirktrue commented on code in PR #16345:
URL: https://github.com/apache/kafka/pull/16345#discussion_r1643453935


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/CommitRequestManager.java:
##
@@ -785,6 +805,13 @@ void removeRequest() {
 }
 }
 
+// Visible for testing
+Optional lastEpochSentOnCommit() {
+return lastEpochSentOnCommit;
+}
+
+
+

Review Comment:
   Sorry for being the whitespace police 😆



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: consumer log fixes [kafka]

2024-06-17 Thread via GitHub


kirktrue commented on code in PR #16345:
URL: https://github.com/apache/kafka/pull/16345#discussion_r1643453077


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/MembershipManagerImpl.java:
##
@@ -362,7 +362,7 @@ private void transitionTo(MemberState nextState) {
 metricsManager.recordRebalanceStarted(time.milliseconds());
 }
 
-log.trace("Member {} with epoch {} transitioned from {} to {}.", 
memberId, memberEpoch, state, nextState);
+log.info("Member {} with epoch {} transitioned from {} to {}.", 
memberId, memberEpoch, state, nextState);

Review Comment:
   OK, yes, that totally makes sense 👍 Plus, when community members file issues 
against KIP-848 client work, it's better to have those logs included by 
default. Good call 😄 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (KAFKA-16942) Use ConcurrentHashMap in RecordAccumulator#nodeStats

2024-06-17 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-16942:
--
Component/s: producer 

> Use ConcurrentHashMap in RecordAccumulator#nodeStats
> 
>
> Key: KAFKA-16942
> URL: https://issues.apache.org/jira/browse/KAFKA-16942
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, producer 
>Reporter: Kuan Po Tseng
>Assignee: Kuan Po Tseng
>Priority: Major
>
> per discussed in 
> [https://github.com/apache/kafka/pull/16231#discussion_r1635345881]
> Through the ConcurrentMapBenchmark, we observed that in scenarios where write 
> operations (i.e., computeIfAbsent) constitute 10%, the get performance of 
> CopyOnWriteMap is lower compared to ConcurrentHashMap. However, when 
> iterating over entrySet and values, CopyOnWriteMap performs better than 
> ConcurrentHashMap.
> In RecordAccumulator#nodeStats, the computeIfAbsent method is rarely 
> triggered, and we only use the get method to read data. Therefore, switching 
> to ConcurrentHashMap would gain better performance.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-16986) After upgrading to Kafka 3.4.1, the producer constantly produces logs related to topicId changes

2024-06-17 Thread Vinicius Vieira dos Santos (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855730#comment-17855730
 ] 

Vinicius Vieira dos Santos edited comment on KAFKA-16986 at 6/17/24 9:26 PM:
-

[~jolshan] I have already updated my application to client version 3.6.1 and 
the log remains. In the commit you sent, the indicated section remains at 
info level; will this be changed?

 

[https://github.com/apache/kafka/commit/6d9d65e6664153f8a7557ec31b5983eb0ac26782#diff-97c2911e6e1b97ed9b3c4e76531a321d8ea1fc6aa2c727c27b0a5e0ced893a2cR408]


was (Author: JIRAUSER305851):
[~jolshan] In the commit that sent the indicated section it remains as info, 
will this be changed?

 

https://github.com/apache/kafka/commit/6d9d65e6664153f8a7557ec31b5983eb0ac26782#diff-97c2911e6e1b97ed9b3c4e76531a321d8ea1fc6aa2c727c27b0a5e0ced893a2cR408

> After upgrading to Kafka 3.4.1, the producer constantly produces logs related 
> to topicId changes
> 
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 3.0.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Minor
> Attachments: image.png
>
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
> behavior is not expected because the topic was not deleted and recreated so 
> it should simply use the cached data and not go through this client log line.
> We have some applications with around 15 topics and 40 partitions which means 
> around 600 log lines when metadata updates occur
> The main thing for me is to know if this could indicate a problem or if I can 
> simply change the log level of the org.apache.kafka.clients.Metadata class to 
> warn without worries
>  
> There are other reports of the same behavior like this:  
> https://stackoverflow.com/questions/74652231/apache-kafka-resetting-the-last-seen-epoch-of-partition-why
>  
> *Some log occurrences over an interval of about 7 hours, each block refers to 
> an instance of the application in kubernetes*
>  
> !image.png!
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> If you need any more details, please let me know.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINOR: consumer log fixes [kafka]

2024-06-17 Thread via GitHub


lianetm commented on code in PR #16345:
URL: https://github.com/apache/kafka/pull/16345#discussion_r1643448438


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/MembershipManagerImpl.java:
##
@@ -362,7 +362,7 @@ private void transitionTo(MemberState nextState) {
 metricsManager.recordRebalanceStarted(time.milliseconds());
 }
 
-log.trace("Member {} with epoch {} transitioned from {} to {}.", 
memberId, memberEpoch, state, nextState);
+log.info("Member {} with epoch {} transitioned from {} to {}.", 
memberId, memberEpoch, state, nextState);

Review Comment:
   While troubleshooting different scenarios on the stress tests we're running, 
we always end up struggling to understand the member state just because we 
don't have this log info handy, so the intention was to move it up, at least 
during this Preview stage, to easily track the state machine. This should 
really only come out on events that we do care about (joining, leaving, 
reconciling, errors), but if in practice we see it ends up generating more 
noise than value, we'll lower it down then. Makes sense?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: consumer log fixes [kafka]

2024-06-17 Thread via GitHub


lianetm commented on code in PR #16345:
URL: https://github.com/apache/kafka/pull/16345#discussion_r1643442351


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/CommitRequestManager.java:
##
@@ -785,6 +805,13 @@ void removeRequest() {
 }
 }
 
+// Visible for testing
+Optional lastEpochSentOnCommit() {
+return lastEpochSentOnCommit;
+}
+
+
+

Review Comment:
   Sure, removed.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-16968: Make 3.8-IV0 a stable MetadataVersion and create 3.9-IV0 [kafka]

2024-06-17 Thread via GitHub


cmccabe commented on code in PR #16347:
URL: https://github.com/apache/kafka/pull/16347#discussion_r1643441501


##
server-common/src/main/java/org/apache/kafka/server/common/GroupVersion.java:
##
@@ -22,7 +22,7 @@
 public enum GroupVersion implements FeatureVersion {
 
 // Version 1 enables the consumer rebalance protocol (KIP-848).
-GV_1(1, MetadataVersion.IBP_4_0_IV0, Collections.emptyMap());
+GV_1(1, MetadataVersion.IBP_3_9_IV0, Collections.emptyMap());

Review Comment:
   I can leave it attached to IBP_4_0_IV0.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (KAFKA-16957) Enable KafkaConsumerTest#configurableObjectsShouldSeeGeneratedClientId to work with CLASSIC and CONSUMER

2024-06-17 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-16957:
--
Fix Version/s: 3.9.0

> Enable KafkaConsumerTest#configurableObjectsShouldSeeGeneratedClientId to 
> work with CLASSIC and CONSUMER 
> -
>
> Key: KAFKA-16957
> URL: https://issues.apache.org/jira/browse/KAFKA-16957
> Project: Kafka
>  Issue Type: Test
>  Components: clients, consumer, unit tests
>Reporter: Chia-Ping Tsai
>Assignee: Chia Chuan Yu
>Priority: Minor
> Fix For: 3.9.0
>
>
> The `CLIENT_IDS` is a static variable, so the latter one will see previous 
> test results. We should clear it before testing.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16957) Enable KafkaConsumerTest#configurableObjectsShouldSeeGeneratedClientId to work with CLASSIC and CONSUMER

2024-06-17 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-16957:
--
Component/s: clients
 consumer
 unit tests

> Enable KafkaConsumerTest#configurableObjectsShouldSeeGeneratedClientId to 
> work with CLASSIC and CONSUMER 
> -
>
> Key: KAFKA-16957
> URL: https://issues.apache.org/jira/browse/KAFKA-16957
> Project: Kafka
>  Issue Type: Test
>  Components: clients, consumer, unit tests
>Reporter: Chia-Ping Tsai
>Assignee: Chia Chuan Yu
>Priority: Minor
>
> The `CLIENT_IDS` is a static variable, so the latter one will see previous 
> test results. We should clear it before testing.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-16968: Make 3.8-IV0 a stable MetadataVersion and create 3.9-IV0 [kafka]

2024-06-17 Thread via GitHub


cmccabe commented on PR #16347:
URL: https://github.com/apache/kafka/pull/16347#issuecomment-2174438309

   > @cmccabe @AndrewJSchofield @dajac , there is another PR adding a new 
metadata version of 3.8.IV1(it will be the next production ready MV), it also 
tries to move KIP-966 from 3.8IV0 to 4.0IV0. 
https://github.com/apache/kafka/pull/15673/files
   The PR has been modified for quite some time, If we want to make KIP-966 to 
3.9IV0, I can do it in a separate PR.
   
   Thanks for pointing out this existing PR. As discussed on the mailing list, 
3.8 is done at this point. No new features are going into it. The changes from 
#15673 can be done in 3.9-IV0 once this PR creates it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] 3.8 cherry-pick for KAFKA-16954 [kafka]

2024-06-17 Thread via GitHub


lianetm commented on PR #16376:
URL: https://github.com/apache/kafka/pull/16376#issuecomment-2174438045

   Hey @lucasbru , here is the PR cherry-picking only the required changes for 
3.8 for the consumer close fix. The conflicts were due to this client cleanup 
commit 
https://github.com/apache/kafka/commit/79ea7d6122ac4dbc1441e9efbddae44b1a9c93f9 
that is not in 3.8 (and it's not needed), so I left it out. Thanks!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-16480: Bump ListOffsets version, IBP version and mark last version of ListOffsets as unstable [kafka]

2024-06-17 Thread via GitHub


cmccabe commented on PR #15673:
URL: https://github.com/apache/kafka/pull/15673#issuecomment-2174437063

   3.8 is done at this point. These changes can be done in 3.9-IV0 once #16347 
is in.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-16968: Make 3.8-IV0 a stable MetadataVersion and create 3.9-IV0 [kafka]

2024-06-17 Thread via GitHub


cmccabe commented on code in PR #16347:
URL: https://github.com/apache/kafka/pull/16347#discussion_r1643436916


##
server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java:
##
@@ -232,7 +233,7 @@ public enum MetadataVersion {
  * Think carefully before you update this value. ONCE A METADATA 
VERSION IS PRODUCTION,
  * IT CANNOT BE CHANGED.
  */
-public static final MetadataVersion LATEST_PRODUCTION = IBP_3_7_IV4;
+public static final MetadataVersion LATEST_PRODUCTION = IBP_3_8_IV0;

Review Comment:
   The whole PR needs to be backported to 3.8.
   
   But yes, this line as well :)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (KAFKA-16986) After upgrading to Kafka 3.4.1, the producer constantly produces logs related to topicId changes

2024-06-17 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-16986:
--
Component/s: producer 

> After upgrading to Kafka 3.4.1, the producer constantly produces logs related 
> to topicId changes
> 
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 3.0.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Minor
> Attachments: image.png
>
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
> behavior is not expected because the topic was not deleted and recreated so 
> it should simply use the cached data and not go through this client log line.
> We have some applications with around 15 topics and 40 partitions which means 
> around 600 log lines when metadata updates occur
> The main thing for me is to know if this could indicate a problem or if I can 
> simply change the log level of the org.apache.kafka.clients.Metadata class to 
> warn without worries
>  
> There are other reports of the same behavior like this:  
> https://stackoverflow.com/questions/74652231/apache-kafka-resetting-the-last-seen-epoch-of-partition-why
>  
> *Some log occurrences over an interval of about 7 hours, each block refers to 
> an instance of the application in kubernetes*
>  
> !image.png!
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> If you need any more details, please let me know.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] 3.8 cherry-pick for KAFKA-16954 [kafka]

2024-06-17 Thread via GitHub


lianetm opened a new pull request, #16376:
URL: https://github.com/apache/kafka/pull/16376

   Cherry-picking 2 commits required for fixing KAFKA-16954 for 3.8 (fix 
consumer close to release assignment in background)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-16788 - Fix resource leakage during connector start() failure [kafka]

2024-06-17 Thread via GitHub


gharris1727 commented on code in PR #16095:
URL: https://github.com/apache/kafka/pull/16095#discussion_r1643426226


##
connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java:
##
@@ -231,6 +231,12 @@ private synchronized void onFailure(Throwable t) {
 if (this.state == State.FAILED)
 return;
 
+// Call stop() on the connector to release its resources. Connector
+// could fail in the start() method, which is why we call stop() on
+// INIT state as well.
+if (this.state == State.STARTED || this.state == State.INIT)
+connector.stop();

Review Comment:
   > I think leaving the resources allocated when the connector is in the 
FAILED state until a restart is fine.
   
   I created a follow-up for releasing resources early here: 
https://issues.apache.org/jira/browse/KAFKA-16987 so we can focus on the 
startup leak in this PR.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (KAFKA-16987) Release connector & task resources when entering FAILED state

2024-06-17 Thread Greg Harris (Jira)
Greg Harris created KAFKA-16987:
---

 Summary: Release connector & task resources when entering FAILED 
state
 Key: KAFKA-16987
 URL: https://issues.apache.org/jira/browse/KAFKA-16987
 Project: Kafka
  Issue Type: Improvement
  Components: connect
Reporter: Greg Harris


Connectors and Tasks will enter the FAILED state if an unexpected exception 
appears. Connectors and tasks also have many resources associated with them 
that must be explicitly cleaned up by calling close().

Currently, this cleanup only takes place when the connectors are requested to 
shut down and the state transitions to UNASSIGNED. This cleanup can be done 
earlier, so that resources are not held while the connector or task is in the 
FAILED state.
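
A minimal sketch of the idea, using hypothetical names rather than the actual
Connect runtime classes (the real code lives in WorkerConnector/WorkerTask): the
point is only that close() would be invoked as soon as FAILED is entered, instead
of waiting for the eventual transition to UNASSIGNED.

    import java.io.Closeable;
    import java.io.IOException;

    // Hypothetical illustration only; not the Connect runtime code.
    enum State { INIT, STARTED, FAILED, UNASSIGNED }

    class ManagedWorker {
        private State state = State.INIT;
        private final Closeable resources; // e.g. clients and readers owned by the worker

        ManagedWorker(Closeable resources) {
            this.resources = resources;
        }

        synchronized void onFailure(Throwable t) {
            if (state == State.FAILED)
                return;
            state = State.FAILED;
            releaseResources(); // proposed: release eagerly when entering FAILED
        }

        synchronized void shutdown() {
            state = State.UNASSIGNED;
            releaseResources(); // current behavior: cleanup only happens here
        }

        private void releaseResources() {
            try {
                resources.close();
            } catch (IOException e) {
                // best-effort cleanup; a real implementation would log this
            }
        }
    }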



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINOR: consumer log fixes [kafka]

2024-06-17 Thread via GitHub


kirktrue commented on code in PR #16345:
URL: https://github.com/apache/kafka/pull/16345#discussion_r1643412421


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/CommitRequestManager.java:
##
@@ -785,6 +805,13 @@ void removeRequest() {
 }
 }
 
+// Visible for testing
+Optional lastEpochSentOnCommit() {
+return lastEpochSentOnCommit;
+}
+
+
+

Review Comment:
   Super nit: extra whitespace not needed, right?



##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/MembershipManagerImpl.java:
##
@@ -362,7 +362,7 @@ private void transitionTo(MemberState nextState) {
 metricsManager.recordRebalanceStarted(time.milliseconds());
 }
 
-log.trace("Member {} with epoch {} transitioned from {} to {}.", 
memberId, memberEpoch, state, nextState);
+log.info("Member {} with epoch {} transitioned from {} to {}.", 
memberId, memberEpoch, state, nextState);

Review Comment:
   This one seems like it would make for a noisy log. What about `DEBUG` as a 
compromise?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (KAFKA-10670) repo.maven.apache.org: Name or service not known

2024-06-17 Thread Greg Harris (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Harris resolved KAFKA-10670.
-
Resolution: Cannot Reproduce

> repo.maven.apache.org: Name or service not known
> 
>
> Key: KAFKA-10670
> URL: https://issues.apache.org/jira/browse/KAFKA-10670
> Project: Kafka
>  Issue Type: Bug
>  Components: build
> Environment: Fedora 33, Aarch64 
>Reporter: Lutz Weischer
>Priority: Minor
>
> ./gradlew jar 
> fails: 
> > Configure project :
> Building project 'core' with Scala version 2.13.3
> Building project 'streams-scala' with Scala version 2.13.3
> > Task :clients:processMessages
> MessageGenerator: processed 121 Kafka message JSON files(s).
> > Task :clients:compileJava FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':clients:compileJava'.
> > Could not resolve all files for configuration ':clients:compileClasspath'.
>> Could not resolve org.xerial.snappy:snappy-java:1.1.7.7.
>  Required by:
>  project :clients
>   > Could not resolve org.xerial.snappy:snappy-java:1.1.7.7.
>  > Could not get resource 
> 'https://repo.maven.apache.org/maven2/org/xerial/snappy/snappy-java/1.1.7.7/snappy-java-1.1.7.7.pom'.
> > Could not HEAD 
> 'https://repo.maven.apache.org/maven2/org/xerial/snappy/snappy-java/1.1.7.7/snappy-java-1.1.7.7.pom'.
>> repo.maven.apache.org: Name or service not known
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output. Run with --scan to get full insights.
> * Get more help at https://help.gradle.org
> Deprecated Gradle features were used in this build, making it incompatible 
> with Gradle 7.0.
> Use '--warning-mode all' to show the individual deprecation warnings.
> See 
> https://docs.gradle.org/6.7/userguide/command_line_interface.html#sec:command_line_warnings
> BUILD FAILED in 21s
> 4 actionable tasks: 4 executed



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16986) After upgrading to Kafka 3.4.1, the producer constantly produces logs related to topicId changes

2024-06-17 Thread Vinicius Vieira dos Santos (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855730#comment-17855730
 ] 

Vinicius Vieira dos Santos commented on KAFKA-16986:


[~jolshan] In the linked commit, the line indicated below still remains at info 
level; will this be changed?

 

https://github.com/apache/kafka/commit/6d9d65e6664153f8a7557ec31b5983eb0ac26782#diff-97c2911e6e1b97ed9b3c4e76531a321d8ea1fc6aa2c727c27b0a5e0ced893a2cR408

> After upgrading to Kafka 3.4.1, the producer constantly produces logs related 
> to topicId changes
> 
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.0.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Minor
> Attachments: image.png
>
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
> behavior is not expected because the topic was not deleted and recreated so 
> it should simply use the cached data and not go through this client log line.
> We have some applications with around 15 topics and 40 partitions which means 
> around 600 log lines when metadata updates occur
> The main thing for me is to know if this could indicate a problem or if I can 
> simply change the log level of the org.apache.kafka.clients.Metadata class to 
> warn without worries
>  
> There are other reports of the same behavior like this:  
> https://stackoverflow.com/questions/74652231/apache-kafka-resetting-the-last-seen-epoch-of-partition-why
>  
> *Some log occurrences over an interval of about 7 hours, each block refers to 
> an instance of the application in kubernetes*
>  
> !image.png!
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> If you need any more details, please let me know.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16986) After upgrading to Kafka 3.4.1, the producer constantly produces logs related to topicId changes

2024-06-17 Thread Justine Olshan (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855729#comment-17855729
 ] 

Justine Olshan commented on KAFKA-16986:


Hey there, this has been fixed for versions > 3.5 
[https://github.com/apache/kafka/commit/6d9d65e6664153f8a7557ec31b5983eb0ac26782], 
where it should become a debug-level log.

It doesn't indicate a problem. If you don't wish to upgrade, you can reduce the 
log level.
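
For anyone who stays on an affected client version, a minimal log4j override that
demotes just this logger (a sketch assuming the log4j 1.x / reload4j properties
format used by the Kafka tooling; adapt it to whatever logging backend the
application actually uses):

    # Hide the per-partition "Resetting the last seen epoch" INFO lines
    log4j.logger.org.apache.kafka.clients.Metadata=WARN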

> After upgrading to Kafka 3.4.1, the producer constantly produces logs related 
> to topicId changes
> 
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.0.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Minor
> Attachments: image.png
>
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
> behavior is not expected because the topic was not deleted and recreated so 
> it should simply use the cached data and not go through this client log line.
> We have some applications with around 15 topics and 40 partitions which means 
> around 600 log lines when metadata updates occur
> The main thing for me is to know if this could indicate a problem or if I can 
> simply change the log level of the org.apache.kafka.clients.Metadata class to 
> warn without worries
>  
> There are other reports of the same behavior like this:  
> https://stackoverflow.com/questions/74652231/apache-kafka-resetting-the-last-seen-epoch-of-partition-why
>  
> *Some log occurrences over an interval of about 7 hours, each block refers to 
> an instance of the application in kubernetes*
>  
> !image.png!
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> If you need any more details, please let me know.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-5736) Improve error message in Connect when all kafka brokers are down

2024-06-17 Thread Greg Harris (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Harris resolved KAFKA-5736.

  Assignee: (was: Konstantine Karantasis)
Resolution: Won't Fix

This appears to be a third-party plugin implementation using the 
AbstractPartitionAssignor, and violating one of its preconditions. There is no 
reasonable alternative behavior for the AbstractPartitionAssignor, and this is 
up to the third-party plugin developer to fix.

> Improve error message in Connect when all kafka brokers are down
> 
>
> Key: KAFKA-5736
> URL: https://issues.apache.org/jira/browse/KAFKA-5736
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Affects Versions: 0.11.0.0
>Reporter: Konstantine Karantasis
>Priority: Major
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> Currently when all the Kafka brokers are down, Kafka Connect is failing with 
> a pretty unintuitive message when it tries to, for instance, reconfigure 
> tasks. 
> Example output: 
> {code:java}
> [2017-08-15 19:12:09,959] ERROR Failed to reconfigure connector's tasks, 
> retrying after backoff: 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> java.lang.IllegalArgumentException: CircularIterator can only be used on 
> non-empty lists
> at 
> org.apache.kafka.common.utils.CircularIterator.(CircularIterator.java:29)
> at 
> org.apache.kafka.clients.consumer.RoundRobinAssignor.assign(RoundRobinAssignor.java:61)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractPartitionAssignor.assign(AbstractPartitionAssignor.java:68)
> at 
> ... (connector code)
> at 
> org.apache.kafka.connect.runtime.Worker.connectorTaskConfigs(Worker.java:230)
> {code}
> The error message needs to be improved, since its root cause is the absence 
> kafka brokers for assignment. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16986) After upgrading to Kafka 3.4.1, the producer constantly produces logs related to topicId changes

2024-06-17 Thread Vinicius Vieira dos Santos (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinicius Vieira dos Santos updated KAFKA-16986:
---
Summary: After upgrading to Kafka 3.4.1, the producer constantly produces 
logs related to topicId changes  (was: After upgrading to Kafka 3.41, the 
producer constantly produces logs related to topicId changes)

> After upgrading to Kafka 3.4.1, the producer constantly produces logs related 
> to topicId changes
> 
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.0.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Minor
> Attachments: image.png
>
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
> behavior is not expected because the topic was not deleted and recreated so 
> it should simply use the cached data and not go through this client log line.
> We have some applications with around 15 topics and 40 partitions which means 
> around 600 log lines when metadata updates occur
> The main thing for me is to know if this could indicate a problem or if I can 
> simply change the log level of the org.apache.kafka.clients.Metadata class to 
> warn without worries
>  
> There are other reports of the same behavior like this:  
> https://stackoverflow.com/questions/74652231/apache-kafka-resetting-the-last-seen-epoch-of-partition-why
>  
> *Some log occurrences over an interval of about 7 hours, each block refers to 
> an instance of the application in kubernetes*
>  
> !image.png!
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> If you need any more details, please let me know.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16986) After upgrading to Kafka 3.41, the producer constantly produces logs related to topicId changes

2024-06-17 Thread Vinicius Vieira dos Santos (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinicius Vieira dos Santos updated KAFKA-16986:
---
Description: 
When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that the 
applications began to log the message "{*}Resetting the last seen epoch of 
partition PAYMENTS-0 to 0 since the associated topicId changed from null to 
szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
behavior is not expected because the topic was not deleted and recreated so it 
should simply use the cached data and not go through this client log line.

We have some applications with around 15 topics and 40 partitions which means 
around 600 log lines when metadata updates occur

The main thing for me is to know if this could indicate a problem or if I can 
simply change the log level of the org.apache.kafka.clients.Metadata class to 
warn without worries

 

There are other reports of the same behavior like this:  
https://stackoverflow.com/questions/74652231/apache-kafka-resetting-the-last-seen-epoch-of-partition-why

 

*Some log occurrences over an interval of about 7 hours, each block refers to 
an instance of the application in kubernetes*

 

!image.png!

*My scenario:*

*Application:*
 - Java: 21

 - Client: 3.6.1, also tested on 3.0.1 and has the same behavior

*Broker:*
 - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
image

 

If you need any more details, please let me know.

  was:
When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that the 
applications began to log the message "{*}Resetting the last seen epoch of 
partition PAYMENTS-0 to 0 since the associated topicId changed from null to 
szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
behavior is not expected because the topic was not deleted and recreated so it 
should simply use the cached data and not go through this client log line.

We have some applications with around 15 topics and 40 partitions which means 
around 600 log lines when metadata updates occur

The main thing for me is to know if this could indicate a problem or if I can 
simply change the log level of the org.apache.kafka.clients.Metadata class to 
warn without worries

*Some log occurrences over an interval of about 7 hours, each block refers to 
an instance of the application in kubernetes*

 

!image.png!

*My scenario:*

*Application:*
 - Java: 21

 - Client: 3.6.1, also tested on 3.0.1 and has the same behavior

*Broker:*
 - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
image

 

If you need any more details, please let me know.


> After upgrading to Kafka 3.41, the producer constantly produces logs related 
> to topicId changes
> ---
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.0.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Minor
> Attachments: image.png
>
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
> behavior is not expected because the topic was not deleted and recreated so 
> it should simply use the cached data and not go through this client log line.
> We have some applications with around 15 topics and 40 partitions which means 
> around 600 log lines when metadata updates occur
> The main thing for me is to know if this could indicate a problem or if I can 
> simply change the log level of the org.apache.kafka.clients.Metadata class to 
> warn without worries
>  
> There are other reports of the same behavior like this:  
> https://stackoverflow.com/questions/74652231/apache-kafka-resetting-the-last-seen-epoch-of-partition-why
>  
> *Some log occurrences over an interval of about 7 hours, each block refers to 
> an instance of the application in kubernetes*
>  
> !image.png!
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> If you need any more details, please let me know.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16986) After upgrading to Kafka 3.41, the producer constantly produces logs related to topicId changes

2024-06-17 Thread Vinicius Vieira dos Santos (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinicius Vieira dos Santos updated KAFKA-16986:
---
Description: 
When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that the 
applications began to log the message "{*}Resetting the last seen epoch of 
partition PAYMENTS-0 to 0 since the associated topicId changed from null to 
szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
behavior is not expected because the topic was not deleted and recreated so it 
should simply use the cached data and not go through this client log line.

We have some applications with around 15 topics and 40 partitions which means 
around 600 log lines when metadata updates occur

The main thing for me is to know if this could indicate a problem or if I can 
simply change the log level of the org.apache.kafka.clients.Metadata class to 
warn without worries

*Some log occurrences over an interval of about 7 hours, each block refers to 
an instance of the application in kubernetes*

 

!image.png!

*My scenario:*

*Application:*
 - Java: 21

 - Client: 3.6.1, also tested on 3.0.1 and has the same behavior

*Broker:*
 - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
image

 

If you need any more details, please let me know.

  was:
When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that the 
applications began to log the message "{*}Resetting the last seen epoch of 
partition PAYMENTS-0 to 0 since the associated topicId changed from null to 
szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
behavior is not expected because the topic was not deleted and recreated so it 
should simply use the cached data and not go through this client log line.

As we have some applications with the objective of distributing messages that 
have around 15 topics and each one with 40 partitions, which generates 600 log 
lines per metadata update

The main thing for me is to know if this could indicate a problem or if I can 
simply change the log level of the org.apache.kafka.clients.Metadata class to 
warn without worries

*Some log occurrences over an interval of about 7 hours, each block refers to 
an instance of the application in kubernetes*

 

!image.png!

*My scenario:*

*Application:*
 - Java: 21

 - Client: 3.6.1, also tested on 3.0.1 and has the same behavior

*Broker:*
 - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
image

 

If you need any more details, please let me know.


> After upgrading to Kafka 3.41, the producer constantly produces logs related 
> to topicId changes
> ---
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.0.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Minor
> Attachments: image.png
>
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
> behavior is not expected because the topic was not deleted and recreated so 
> it should simply use the cached data and not go through this client log line.
> We have some applications with around 15 topics and 40 partitions which means 
> around 600 log lines when metadata updates occur
> The main thing for me is to know if this could indicate a problem or if I can 
> simply change the log level of the org.apache.kafka.clients.Metadata class to 
> warn without worries
> *Some log occurrences over an interval of about 7 hours, each block refers to 
> an instance of the application in kubernetes*
>  
> !image.png!
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> If you need any more details, please let me know.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16986) After upgrading to Kafka 3.41, the producer constantly produces logs related to topicId changes

2024-06-17 Thread Vinicius Vieira dos Santos (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinicius Vieira dos Santos updated KAFKA-16986:
---
Priority: Minor  (was: Major)

> After upgrading to Kafka 3.41, the producer constantly produces logs related 
> to topicId changes
> ---
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.0.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Minor
> Attachments: image.png
>
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
> behavior is not expected because the topic was not deleted and recreated so 
> it should simply use the cached data and not go through this client log line.
> As we have some applications with the objective of distributing messages that 
> have around 15 topics and each one with 40 partitions, which generates 600 
> log lines per metadata update
> The main thing for me is to know if this could indicate a problem or if I can 
> simply change the log level of the org.apache.kafka.clients.Metadata class to 
> warn without worries
> *Some log occurrences over an interval of about 7 hours, each block refers to 
> an instance of the application in kubernetes*
>  
> !image.png!
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> If you need any more details, please let me know.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16986) After upgrading to Kafka 3.41, the producer constantly produces logs related to topicId changes

2024-06-17 Thread Vinicius Vieira dos Santos (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinicius Vieira dos Santos updated KAFKA-16986:
---
Description: 
When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that the 
applications began to log the message "{*}Resetting the last seen epoch of 
partition PAYMENTS-0 to 0 since the associated topicId changed from null to 
szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
behavior is not expected because the topic was not deleted and recreated so it 
should simply use the cached data and not go through this client log line.

As we have some applications with the objective of distributing messages that 
have around 15 topics and each one with 40 partitions, which generates 600 log 
lines per metadata update

The main thing for me is to know if this could indicate a problem or if I can 
simply change the log level of the org.apache.kafka.clients.Metadata class to 
warn without worries

*Some log occurrences over an interval of about 7 hours, each block refers to 
an instance of the application in kubernetes*

 

!image.png!

*My scenario:*

*Application:*
 - Java: 21

 - Client: 3.6.1, also tested on 3.0.1 and has the same behavior

*Broker:*
 - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
image

 

If you need any more details, please let me know.

  was:
When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that the 
applications began to log the message "{*}Resetting the last seen epoch of 
partition PAYMENTS-0 to 0 since the associated topicId changed from null to 
szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
behavior is not expected because the topic was not deleted and recreated so it 
should simply use the cached data and not go through this client log line.

 

*Some log occurrences over an interval of about 7 hours, each block refers to 
an instance of the application in kubernetes*

 

!image.png!

 

*My scenario:*

*Application:*
 - Java: 21

 - Client: 3.6.1, also tested on 3.0.1 and has the same behavior

*Broker:*
 - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
image

 

If you need any more details, please let me know.


> After upgrading to Kafka 3.41, the producer constantly produces logs related 
> to topicId changes
> ---
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.0.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Major
> Attachments: image.png
>
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
> behavior is not expected because the topic was not deleted and recreated so 
> it should simply use the cached data and not go through this client log line.
> As we have some applications with the objective of distributing messages that 
> have around 15 topics and each one with 40 partitions, which generates 600 
> log lines per metadata update
> The main thing for me is to know if this could indicate a problem or if I can 
> simply change the log level of the org.apache.kafka.clients.Metadata class to 
> warn without worries
> *Some log occurrences over an interval of about 7 hours, each block refers to 
> an instance of the application in kubernetes*
>  
> !image.png!
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> If you need any more details, please let me know.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-12164) ssue when kafka connect worker pod restart, during creation of nested partition directories in hdfs file system.

2024-06-17 Thread Greg Harris (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Harris resolved KAFKA-12164.
-
Resolution: Invalid

This issue appears to deal only with a Connect plugin, which is not supported 
by the Apache Kafka project. If/when an issue with the Connect framework is 
implicated, a new ticket may be opened with details about that issue.

> ssue when kafka connect worker pod restart, during creation of nested 
> partition directories in hdfs file system.
> 
>
> Key: KAFKA-12164
> URL: https://issues.apache.org/jira/browse/KAFKA-12164
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Reporter: kaushik srinivas
>Priority: Critical
>
> In our production labs, an issue is observed. Below is the sequence of the 
> same.
>  # hdfs connector is added to the connect worker.
>  # hdfs connector is creating folders in hdfs /test1=1/test2=2/
> Based on the custom partitioner. Here test1 and test2 are two separate nested 
> directories derived from multiple fields in the record using a custom 
> partitioner.
>  # Now kafka connect hdfs connector uses below function calls to create the 
> directories in the hdfs file system.
> fs.mkdirs(new Path(filename));
> ref: 
> [https://github.com/confluentinc/kafka-connect-hdfs/blob/master/src/main/java/io/confluent/connect/hdfs/storage/HdfsStorage.java]
> Now the important thing to note is that if mkdirs() is a non atomic operation 
> (i.e can result in partial execution if interrupted)
> then suppose the first directory ie test1 is created and just before creation 
> of test2 in hdfs happens if there is a restart to the connect worker pod. 
> Then the hdfs file system will remain with partial folders created for 
> partitions during the restart time frames.
> So we might have conditions in hdfs as below
> /test1=0/test2=0/
> /test1=1/
> /test1=2/test2=2
> /test1=3/test2=3
> So the second partition has a missing directory in it. And if hive 
> integration is enabled, hive metastore exceptions will occur since there is a 
> partition expected from hive table is missing for few partitions in hdfs.
> *This can occur to any connector with some ongoing non atomic operation and a 
> restart is triggered to kafka connect worker pod. This will result in some 
> partially completed states in the system and may cause issues for the 
> connector to continue its operation*.
> *This is a very critical issue and needs some attention on ways for handling 
> the same.*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16986) After upgrading to Kafka 3.41, the producer constantly produces logs related to topicId changes

2024-06-17 Thread Vinicius Vieira dos Santos (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinicius Vieira dos Santos updated KAFKA-16986:
---
Description: 
When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that the 
applications began to log the message "{*}Resetting the last seen epoch of 
partition PAYMENTS-0 to 0 since the associated topicId changed from null to 
szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
behavior is not expected because the topic was not deleted and recreated so it 
should simply use the cached data and not go through this client log line.

 

*Some log occurrences over an interval of about 7 hours, each block refers to 
an instance of the application in kubernetes*

 

!image.png!

 

*My scenario:*

*Application:*
 - Java: 21

 - Client: 3.6.1, also tested on 3.0.1 and has the same behavior

*Broker:*
 - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
image

 

If you need any more details, please let me know.

  was:
When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that the 
applications began to log the message "{*}Resetting the last seen epoch of 
partition PAYMENTS-0 to 0 since the associated topicId changed from null to 
szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
behavior is not expected because the topic was not deleted and recreated so it 
should simply use the cached data and not go through this client log line.

 

*My scenario:*

*Application:*
 - Java: 21

 - Client: 3.6.1, also tested on 3.0.1 and has the same behavior

*Broker:*
 - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
image

 

If you need any more details, please let me know.

 

 


> After upgrading to Kafka 3.41, the producer constantly produces logs related 
> to topicId changes
> ---
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.0.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Major
> Attachments: image.png
>
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
> behavior is not expected because the topic was not deleted and recreated so 
> it should simply use the cached data and not go through this client log line.
>  
> *Some log occurrences over an interval of about 7 hours, each block refers to 
> an instance of the application in kubernetes*
>  
> !image.png!
>  
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> If you need any more details, please let me know.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16986) After upgrading to Kafka 3.41, the producer constantly produces logs related to topicId changes

2024-06-17 Thread Vinicius Vieira dos Santos (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinicius Vieira dos Santos updated KAFKA-16986:
---
Attachment: image.png

> After upgrading to Kafka 3.41, the producer constantly produces logs related 
> to topicId changes
> ---
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.0.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Major
> Attachments: image.png
>
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
> behavior is not expected because the topic was not deleted and recreated so 
> it should simply use the cached data and not go through this client log line.
>  
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> If you need any more details, please let me know.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-6353) Connector status shows FAILED but actually task is in RUNNING status

2024-06-17 Thread Greg Harris (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Harris resolved KAFKA-6353.

Resolution: Not A Bug

The REST API provides separate statuses for a connector and its tasks. Any one 
of these components can fail independently, including the connector. There is 
no "summary" status for a connector and all of its tasks; that is up to users 
and higher abstraction layers to define.

> Connector status shows FAILED but actually task is in RUNNING status
> 
>
> Key: KAFKA-6353
> URL: https://issues.apache.org/jira/browse/KAFKA-6353
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Affects Versions: 0.10.2.1
>Reporter: Chen He
>Priority: Major
>
> {"name":"test","connector":{"state":"FAILED","trace":"ERROR 
> MESSAGE","worker_id":"localhost:8083"},"tasks":[{"state":"RUNNING","id":0,"worker_id":"localhost:8083"}]}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16986) After upgrading to Kafka 3.41, the producer constantly produces logs related to topicId changes

2024-06-17 Thread Vinicius Vieira dos Santos (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinicius Vieira dos Santos updated KAFKA-16986:
---
Description: 
When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that the 
applications began to log the message "{*}Resetting the last seen epoch of 
partition PAYMENTS-0 to 0 since the associated topicId changed from null to 
szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
behavior is not expected because the topic was not deleted and recreated so it 
should simply use the cached data and not go through this client log line.

 

*My scenario:* 

*Application:* 

- Java: 21

- Client: 3.6.1, also tested on 3.0.1 and has the same behavior

*Broker:* 

- Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 image

 

 

  was:
When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that the 
applications began to log the message "{*}Resetting the last seen epoch of 
partition PAYMENTS-0 to 0 since the associated topicId changed from null to 
szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
behavior is not expected because the topic was not deleted and recreated so it 
should simply use the cached data and not go through this client log line.

 

My scenario:

Application:
 * Java: 21
 * Client: 3.6.1, also tested on 3.0.1 and has the same behavior Broker: 
Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 image


> After upgrading to Kafka 3.41, the producer constantly produces logs related 
> to topicId changes
> ---
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.0.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Major
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
> behavior is not expected because the topic was not deleted and recreated so 
> it should simply use the cached data and not go through this client log line.
>  
> *My scenario:* 
> *Application:* 
> - Java: 21
> - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:* 
> - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16986) After upgrading to Kafka 3.41, the producer constantly produces logs related to topicId changes

2024-06-17 Thread Vinicius Vieira dos Santos (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinicius Vieira dos Santos updated KAFKA-16986:
---
Description: 
When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that the 
applications began to log the message "{*}Resetting the last seen epoch of 
partition PAYMENTS-0 to 0 since the associated topicId changed from null to 
szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
behavior is not expected because the topic was not deleted and recreated so it 
should simply use the cached data and not go through this client log line.

 

*My scenario:*

*Application:*
 - Java: 21

 - Client: 3.6.1, also tested on 3.0.1 and has the same behavior

*Broker:*
 - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
image

 

If you need any more details, please let me know.

 

 

  was:
When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that the 
applications began to log the message "{*}Resetting the last seen epoch of 
partition PAYMENTS-0 to 0 since the associated topicId changed from null to 
szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
behavior is not expected because the topic was not deleted and recreated so it 
should simply use the cached data and not go through this client log line.

 

*My scenario:* 

*Application:* 

- Java: 21

- Client: 3.6.1, also tested on 3.0.1 and has the same behavior

*Broker:* 

- Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 image

 

 


> After upgrading to Kafka 3.41, the producer constantly produces logs related 
> to topicId changes
> ---
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.0.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Major
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
> behavior is not expected because the topic was not deleted and recreated so 
> it should simply use the cached data and not go through this client log line.
>  
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> If you need any more details, please let me know.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16986) After upgrading to Kafka 3.41, the producer constantly produces logs related to topicId changes

2024-06-17 Thread Vinicius Vieira dos Santos (Jira)
Vinicius Vieira dos Santos created KAFKA-16986:
--

 Summary: After upgrading to Kafka 3.41, the producer constantly 
produces logs related to topicId changes
 Key: KAFKA-16986
 URL: https://issues.apache.org/jira/browse/KAFKA-16986
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 3.0.1
Reporter: Vinicius Vieira dos Santos


When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that the 
applications began to log the message "{*}Resetting the last seen epoch of 
partition PAYMENTS-0 to 0 since the associated topicId changed from null to 
szRLmiAiTs8Y0nI8b3Wz1Q{*}" in a very constant, from what I understand this 
behavior is not expected because the topic was not deleted and recreated so it 
should simply use the cached data and not go through this client log line.

 

My scenario:

Application:
 * Java: 21
 * Client: 3.6.1, also tested on 3.0.1 and has the same behavior Broker: 
Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 image



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-16936 : Upgrade slf4j, sys property for provider [kafka]

2024-06-17 Thread via GitHub


gharris1727 commented on code in PR #16324:
URL: https://github.com/apache/kafka/pull/16324#discussion_r1643357210


##
bin/kafka-run-class.sh:
##
@@ -243,6 +243,14 @@ fi
 (( WINDOWS_OS_FORMAT )) && LOG_DIR=$(cygpath --path --mixed "${LOG_DIR}")
 KAFKA_LOG4J_CMD_OPTS="-Dkafka.logs.dir=$LOG_DIR $KAFKA_LOG4J_OPTS"
 
+# slf4j provider settings
+if [ -z "$SLF4J_PROVIDER" ]; then

Review Comment:
   nit: Should this be KAFKA_SLF4J_PROVIDER? Or is this an idiomatic variable 
that we are following by convention?



##
bin/kafka-run-class.sh:
##
@@ -243,6 +243,14 @@ fi
 (( WINDOWS_OS_FORMAT )) && LOG_DIR=$(cygpath --path --mixed "${LOG_DIR}")
 KAFKA_LOG4J_CMD_OPTS="-Dkafka.logs.dir=$LOG_DIR $KAFKA_LOG4J_OPTS"
 
+# slf4j provider settings
+if [ -z "$SLF4J_PROVIDER" ]; then
+  SLF4J_PROVIDER="org.slf4j.reload4j.Reload4jServiceProvider"
+fi
+
+# Add the slf4j provider to KAFKA_LOG4J_CMD_OPTS
+KAFKA_LOG4J_CMD_OPTS="$KAFKA_LOG4J_CMD_OPTS -Dslf4j.provider=$SLF4J_PROVIDER"

Review Comment:
   This is a new system property, so KAFKA_LOG4J_CMD_OPTS shouldn't already 
contain -Dslf4j.provider
   👍 
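   As a usage sketch (the provider class below is simply the default this PR 
proposes; any slf4j 2.x provider present on the classpath could be substituted):
   
       # Explicitly pick the slf4j provider for a single CLI invocation
       SLF4J_PROVIDER=org.slf4j.reload4j.Reload4jServiceProvider bin/kafka-topics.sh --version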



##
build.gradle:
##
@@ -928,14 +928,15 @@ project(':server') {
 implementation libs.jacksonDatabind
 
 implementation libs.slf4jApi
+implementation libs.slf4jReload4j

Review Comment:
   This is the only dependency change that isn't just a version change.
   
   Based on my understanding, we should only include slf4jReload4j in 
situations when we are also providing reload4j. Otherwise we provide the 
binding but not the actual logger.
   
   So this should be compileOnly, like libs.reload4j? Or not at all, like 
`:core`? I'm not exactly sure.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


