[jira] [Updated] (KAFKA-17020) After enabling tiered storage, occasional residual logs are left in the replica

2024-06-21 Thread Jianbin Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianbin Chen updated KAFKA-17020:
-
Description: 
After enabling tiered storage, residual log segments are occasionally left behind on the replica.
Based on what we have observed, the index values of the segments rolled by the 
replica and by the leader are not the same. As a result, the segments uploaded 
to S3 do not match the corresponding log files on the replica side, which makes 
it impossible to delete the local logs.
[!https://private-user-images.githubusercontent.com/19943636/341939158-d0b87a7d-aca1-4700-b3e1-fceff0530c79.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTkwMzY1OTIsIm5iZiI6MTcxOTAzNjI5MiwicGF0aCI6Ii8xOTk0MzYzNi8zNDE5MzkxNTgtZDBiODdhN2QtYWNhMS00NzAwLWIzZTEtZmNlZmYwNTMwYzc5LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA2MjIlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNjIyVDA2MDQ1MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWY3ZDQ2OGIxMmE3OGI2Njc2YzdkNzkwMzlhNmM5MzAxNjY0MWZiMzA2ZjgwNzgzM2JlYTMxMzM4Njk1NGI5MDYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.Sdsvwn0dUi_p1dG0W_AvQY6Iqeimy_UZ8VldKUS1Q0E!|https://private-user-images.githubusercontent.com/19943636/341939158-d0b87a7d-aca1-4700-b3e1-fceff0530c79.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTkwMzY1OTIsIm5iZiI6MTcxOTAzNjI5MiwicGF0aCI6Ii8xOTk0MzYzNi8zNDE5MzkxNTgtZDBiODdhN2QtYWNhMS00NzAwLWIzZTEtZmNlZmYwNTMwYzc5LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA2MjIlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNjIyVDA2MDQ1MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWY3ZDQ2OGIxMmE3OGI2Njc2YzdkNzkwMzlhNmM5MzAxNjY0MWZiMzA2ZjgwNzgzM2JlYTMxMzM4Njk1NGI5MDYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.Sdsvwn0dUi_p1dG0W_AvQY6Iqeimy_UZ8VldKUS1Q0E]
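To make the mismatch concrete, the following is a minimal diagnostic sketch (an illustration only, not part of Kafka or the tiered-storage plugin; the two directory paths are placeholders): it collects the base offsets of the *.log segment files of the same partition on the leader and on the replica, so any base offsets that exist only on the replica point at segments that were rolled at different boundaries and therefore never line up with an uploaded, deletable segment.
{code:java}
// Diagnostic sketch (illustration only, not part of Kafka or the tiered-storage plugin).
// The two directory paths below are placeholders.
import java.io.File;
import java.util.TreeSet;

public class CompareSegmentBaseOffsets {
    private static TreeSet<Long> baseOffsets(String partitionDir) {
        TreeSet<Long> offsets = new TreeSet<>();
        File[] files = new File(partitionDir).listFiles((dir, name) -> name.endsWith(".log"));
        if (files != null) {
            for (File f : files) {
                // Segment file names are the 20-digit base offset, e.g. 00000000000000524288.log
                offsets.add(Long.parseLong(f.getName().replace(".log", "")));
            }
        }
        return offsets;
    }

    public static void main(String[] args) {
        TreeSet<Long> leader = baseOffsets("/data01/kafka-logs-leader/topic-0");
        TreeSet<Long> replica = baseOffsets("/data01/kafka-logs-replica/topic-0");
        TreeSet<Long> onlyOnReplica = new TreeSet<>(replica);
        onlyOnReplica.removeAll(leader);
        // Base offsets present only on the replica belong to segments that were rolled
        // differently and therefore never match an uploaded (and deletable) segment.
        System.out.println("Base offsets only on replica: " + onlyOnReplica);
    }
}
{code}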
leader config:
{code:java}
num.partitions=3
default.replication.factor=2
delete.topic.enable=true
auto.create.topics.enable=false
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=1
offsets.retention.minutes=4320
log.roll.ms=8640
log.local.retention.ms=60
log.segment.bytes=536870912
num.replica.fetchers=1
log.retention.ms=1581120
remote.log.manager.thread.pool.size=4
remote.log.reader.threads=4
remote.log.metadata.topic.replication.factor=3
remote.log.storage.system.enable=true
remote.log.metadata.topic.retention.ms=18000
rsm.config.fetch.chunk.cache.class=io.aiven.kafka.tieredstorage.fetch.cache.DiskChunkCache
rsm.config.fetch.chunk.cache.path=/data01/kafka-tiered-storage-cache

# Pick some cache size, 16 GiB here:
rsm.config.fetch.chunk.cache.size=34359738368
rsm.config.fetch.chunk.cache.retention.ms=120
# Prefetching size, 16 MiB here:
rsm.config.fetch.chunk.cache.prefetch.max.size=33554432
rsm.config.storage.backend.class=io.aiven.kafka.tieredstorage.storage.s3.S3Storage
rsm.config.storage.s3.bucket.name=
rsm.config.storage.s3.region=us-west-1
rsm.config.storage.aws.secret.access.key=
rsm.config.storage.aws.access.key.id=
rsm.config.chunk.size=8388608
remote.log.storage.manager.class.path=/home/admin/core-0.0.1-SNAPSHOT/*:/home/admin/s3-0.0.1-SNAPSHOT/*
remote.log.storage.manager.class.name=io.aiven.kafka.tieredstorage.RemoteStorageManager
remote.log.metadata.manager.class.name=org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
remote.log.metadata.manager.listener.name=PLAINTEXT
rsm.config.upload.rate.limit.bytes.per.second=31457280
{code}
 replica config:
{code:java}
num.partitions=3
default.replication.factor=2
delete.topic.enable=true
auto.create.topics.enable=false
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=1
offsets.retention.minutes=4320
log.roll.ms=8640
log.local.retention.ms=60
log.segment.bytes=536870912
num.replica.fetchers=1
log.retention.ms=1581120
remote.log.manager.thread.pool.size=4
remote.log.reader.threads=4
remote.log.metadata.topic.replication.factor=3
remote.log.storage.system.enable=true
#remote.log.metadata.topic.retention.ms=18000
rsm.config.fetch.chunk.cache.class=io.aiven.kafka.tieredstorage.fetch.cache.DiskChunkCache
rsm.config.fetch.chunk.cache.path=/data01/kafka-tiered-storage-cache
# Pick some cache size, 16 GiB here:
rsm.config.fetch.chunk.cache.size=34359738368
rsm.config.fetch.chunk.cache.retention.ms=120
# Prefetching size, 16 MiB here:
rsm.config.fetch.chunk.cache.prefetch.max.size=33554432
rsm.config.stor
{code}

[jira] [Updated] (KAFKA-17020) After enabling tiered storage, occasional residual logs are left in the replica

2024-06-21 Thread Jianbin Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianbin Chen updated KAFKA-17020:
-
Description: 
After enabling tiered storage, residual log segments are occasionally left behind on the replica.
Based on what we have observed, the index values of the segments rolled by the 
replica and by the leader are not the same. As a result, the segments uploaded 
to S3 do not match the corresponding log files on the replica side, which makes 
it impossible to delete the local logs.
[!https://private-user-images.githubusercontent.com/19943636/341939158-d0b87a7d-aca1-4700-b3e1-fceff0530c79.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTkwMzY1OTIsIm5iZiI6MTcxOTAzNjI5MiwicGF0aCI6Ii8xOTk0MzYzNi8zNDE5MzkxNTgtZDBiODdhN2QtYWNhMS00NzAwLWIzZTEtZmNlZmYwNTMwYzc5LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA2MjIlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNjIyVDA2MDQ1MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWY3ZDQ2OGIxMmE3OGI2Njc2YzdkNzkwMzlhNmM5MzAxNjY0MWZiMzA2ZjgwNzgzM2JlYTMxMzM4Njk1NGI5MDYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.Sdsvwn0dUi_p1dG0W_AvQY6Iqeimy_UZ8VldKUS1Q0E!|https://private-user-images.githubusercontent.com/19943636/341939158-d0b87a7d-aca1-4700-b3e1-fceff0530c79.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTkwMzY1OTIsIm5iZiI6MTcxOTAzNjI5MiwicGF0aCI6Ii8xOTk0MzYzNi8zNDE5MzkxNTgtZDBiODdhN2QtYWNhMS00NzAwLWIzZTEtZmNlZmYwNTMwYzc5LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA2MjIlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNjIyVDA2MDQ1MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWY3ZDQ2OGIxMmE3OGI2Njc2YzdkNzkwMzlhNmM5MzAxNjY0MWZiMzA2ZjgwNzgzM2JlYTMxMzM4Njk1NGI5MDYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.Sdsvwn0dUi_p1dG0W_AvQY6Iqeimy_UZ8VldKUS1Q0E]
leader config:
num.partitions=3
default.replication.factor=2
delete.topic.enable=true
auto.create.topics.enable=false
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=1
offsets.retention.minutes=4320
log.roll.ms=8640
log.local.retention.ms=60
log.segment.bytes=536870912
num.replica.fetchers=1
log.retention.ms=1581120
remote.log.manager.thread.pool.size=4
remote.log.reader.threads=4
remote.log.metadata.topic.replication.factor=3
remote.log.storage.system.enable=true
remote.log.metadata.topic.retention.ms=18000
rsm.config.fetch.chunk.cache.class=io.aiven.kafka.tieredstorage.fetch.cache.DiskChunkCache
rsm.config.fetch.chunk.cache.path=/data01/kafka-tiered-storage-cache
 # Pick some cache size, 16 GiB here:
rsm.config.fetch.chunk.cache.size=34359738368
rsm.config.fetch.chunk.cache.retention.ms=120
 # Prefetching size, 16 MiB here:
rsm.config.fetch.chunk.cache.prefetch.max.size=33554432
rsm.config.storage.backend.class=io.aiven.kafka.tieredstorage.storage.s3.S3Storage
rsm.config.storage.s3.bucket.name=
rsm.config.storage.s3.region=us-west-1
rsm.config.storage.aws.secret.access.key=
rsm.config.storage.aws.access.key.id=
rsm.config.chunk.size=8388608
remote.log.storage.manager.class.path=/home/admin/core-0.0.1-SNAPSHOT/*:/home/admin/s3-0.0.1-SNAPSHOT/*
remote.log.storage.manager.class.name=io.aiven.kafka.tieredstorage.RemoteStorageManager
remote.log.metadata.manager.class.name=org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
remote.log.metadata.manager.listener.name=PLAINTEXT
rsm.config.upload.rate.limit.bytes.per.second=31457280
replica config:
num.partitions=3
default.replication.factor=2
delete.topic.enable=true
auto.create.topics.enable=false
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=1
offsets.retention.minutes=4320
log.roll.ms=8640
log.local.retention.ms=60
log.segment.bytes=536870912
num.replica.fetchers=1
log.retention.ms=1581120
remote.log.manager.thread.pool.size=4
remote.log.reader.threads=4
remote.log.metadata.topic.replication.factor=3
remote.log.storage.system.enable=true
#remote.log.metadata.topic.retention.ms=18000
rsm.config.fetch.chunk.cache.class=io.aiven.kafka.tieredstorage.fetch.cache.DiskChunkCache
rsm.config.fetch.chunk.cache.path=/data01/kafka-tiered-storage-cache
 # Pick some cache size, 16 GiB here:
rsm.config.fetch.chunk.cache.size=34359738368
rsm.config.fetch.chunk.cache.retention.ms=120
 # Prefetching size, 16 MiB here:
rsm.config.fetch.chunk.cache.prefetch.max.size=33554432
rsm.config.storage.backend.class=i

[jira] (KAFKA-17020) After enabling tiered storage, occasional residual logs are left in the replica

2024-06-21 Thread Jianbin Chen (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-17020 ]


Jianbin Chen deleted comment on KAFKA-17020:
--

was (Author: jianbin):
Restarting does not resolve this issue. The only solution is to delete the log 
folder corresponding to the replica where the log segment anomaly occurred and 
then resynchronize from the leader.
![image](https://github.com/Aiven-Open/tiered-storage-for-apache-kafka/assets/19943636/7256c156-6e90-4799-b0cf-a48c247c5b51)

> After enabling tiered storage, occasional residual logs are left in the 
> replica
> ---
>
> Key: KAFKA-17020
> URL: https://issues.apache.org/jira/browse/KAFKA-17020
> Project: Kafka
>  Issue Type: Wish
>Affects Versions: 3.7.0
>Reporter: Jianbin Chen
>Priority: Major
>
> After enabling tiered storage, residual log segments are occasionally left 
> behind on the replica.
> Based on what we have observed, the index values of the segments rolled by the 
> replica and by the leader are not the same. As a result, the segments uploaded 
> to S3 do not match the corresponding log files on the replica side, which 
> makes it impossible to delete the local logs.
> [!https://private-user-images.githubusercontent.com/19943636/341939158-d0b87a7d-aca1-4700-b3e1-fceff0530c79.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTkwMzY1OTIsIm5iZiI6MTcxOTAzNjI5MiwicGF0aCI6Ii8xOTk0MzYzNi8zNDE5MzkxNTgtZDBiODdhN2QtYWNhMS00NzAwLWIzZTEtZmNlZmYwNTMwYzc5LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA2MjIlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNjIyVDA2MDQ1MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWY3ZDQ2OGIxMmE3OGI2Njc2YzdkNzkwMzlhNmM5MzAxNjY0MWZiMzA2ZjgwNzgzM2JlYTMxMzM4Njk1NGI5MDYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.Sdsvwn0dUi_p1dG0W_AvQY6Iqeimy_UZ8VldKUS1Q0E!|https://private-user-images.githubusercontent.com/19943636/341939158-d0b87a7d-aca1-4700-b3e1-fceff0530c79.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTkwMzY1OTIsIm5iZiI6MTcxOTAzNjI5MiwicGF0aCI6Ii8xOTk0MzYzNi8zNDE5MzkxNTgtZDBiODdhN2QtYWNhMS00NzAwLWIzZTEtZmNlZmYwNTMwYzc5LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA2MjIlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNjIyVDA2MDQ1MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWY3ZDQ2OGIxMmE3OGI2Njc2YzdkNzkwMzlhNmM5MzAxNjY0MWZiMzA2ZjgwNzgzM2JlYTMxMzM4Njk1NGI5MDYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.Sdsvwn0dUi_p1dG0W_AvQY6Iqeimy_UZ8VldKUS1Q0E]
> leader config:
> num.partitions=3
> default.replication.factor=2
> delete.topic.enable=true
> auto.create.topics.enable=false
> num.recovery.threads.per.data.dir=1
> offsets.topic.replication.factor=3
> transaction.state.log.replication.factor=2
> transaction.state.log.min.isr=1
> offsets.retention.minutes=4320
> log.roll.ms=8640
> log.local.retention.ms=60
> log.segment.bytes=536870912
> num.replica.fetchers=1
> log.retention.ms=1581120
> remote.log.manager.thread.pool.size=4
> remote.log.reader.threads=4
> remote.log.metadata.topic.replication.factor=3
> remote.log.storage.system.enable=true
> remote.log.metadata.topic.retention.ms=18000
> rsm.config.fetch.chunk.cache.class=io.aiven.kafka.tieredstorage.fetch.cache.DiskChunkCache
> rsm.config.fetch.chunk.cache.path=/data01/kafka-tiered-storage-cache
>  # Pick some cache size, 16 GiB here:
> rsm.config.fetch.chunk.cache.size=34359738368
> rsm.config.fetch.chunk.cache.retention.ms=120
>  # Prefetching size, 16 MiB here:
> rsm.config.fetch.chunk.cache.prefetch.max.size=33554432
> rsm.config.storage.backend.class=io.aiven.kafka.tieredstorage.storage.s3.S3Storage
> rsm.config.storage.s3.bucket.name=
> rsm.config.storage.s3.region=us-west-1
> rsm.config.storage.aws.secret.access.key=
> rsm.config.storage.aws.access.key.id=
> rsm.config.chunk.size=8388608
> remote.log.storage.manager.class.path=/home/admin/core-0.0.1-SNAPSHOT/*:/home/admin/s3-0.0.1-SNAPSHOT/*
> remote.log.storage.manager.class.name=io.aiven.kafka.tieredstorage.RemoteStorageManager
> remote.log.metadata.manager.class.name=org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
> remote.log.metadata.manager.listener.name=PLAINTEXT
> rsm.config.upload.rate.limit.bytes.per.second=31457280
> replica config:
> num.partitions=3
> default.replication.factor=2
> delete.topic.enable=true
> auto.create.topics.enable=false
> num.recovery.threads.per.data.dir=1
> offsets.topic.replication.factor=3
> transaction.state.log.replication.facto

[jira] [Commented] (KAFKA-17020) After enabling tiered storage, occasional residual logs are left in the replica

2024-06-21 Thread Jianbin Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856903#comment-17856903
 ] 

Jianbin Chen commented on KAFKA-17020:
--

Restarting does not resolve this issue. The only solution is to delete the log 
folder corresponding to the replica where the log segment anomaly occurred and 
then resynchronize from the leader.
![image](https://github.com/Aiven-Open/tiered-storage-for-apache-kafka/assets/19943636/7256c156-6e90-4799-b0cf-a48c247c5b51)

> After enabling tiered storage, occasional residual logs are left in the 
> replica
> ---
>
> Key: KAFKA-17020
> URL: https://issues.apache.org/jira/browse/KAFKA-17020
> Project: Kafka
>  Issue Type: Wish
>Affects Versions: 3.7.0
>Reporter: Jianbin Chen
>Priority: Major
>
> After enabling tiered storage, residual log segments are occasionally left 
> behind on the replica.
> Based on what we have observed, the index values of the segments rolled by the 
> replica and by the leader are not the same. As a result, the segments uploaded 
> to S3 do not match the corresponding log files on the replica side, which 
> makes it impossible to delete the local logs.
> [!https://private-user-images.githubusercontent.com/19943636/341939158-d0b87a7d-aca1-4700-b3e1-fceff0530c79.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTkwMzY1OTIsIm5iZiI6MTcxOTAzNjI5MiwicGF0aCI6Ii8xOTk0MzYzNi8zNDE5MzkxNTgtZDBiODdhN2QtYWNhMS00NzAwLWIzZTEtZmNlZmYwNTMwYzc5LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA2MjIlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNjIyVDA2MDQ1MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWY3ZDQ2OGIxMmE3OGI2Njc2YzdkNzkwMzlhNmM5MzAxNjY0MWZiMzA2ZjgwNzgzM2JlYTMxMzM4Njk1NGI5MDYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.Sdsvwn0dUi_p1dG0W_AvQY6Iqeimy_UZ8VldKUS1Q0E!|https://private-user-images.githubusercontent.com/19943636/341939158-d0b87a7d-aca1-4700-b3e1-fceff0530c79.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTkwMzY1OTIsIm5iZiI6MTcxOTAzNjI5MiwicGF0aCI6Ii8xOTk0MzYzNi8zNDE5MzkxNTgtZDBiODdhN2QtYWNhMS00NzAwLWIzZTEtZmNlZmYwNTMwYzc5LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA2MjIlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNjIyVDA2MDQ1MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWY3ZDQ2OGIxMmE3OGI2Njc2YzdkNzkwMzlhNmM5MzAxNjY0MWZiMzA2ZjgwNzgzM2JlYTMxMzM4Njk1NGI5MDYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.Sdsvwn0dUi_p1dG0W_AvQY6Iqeimy_UZ8VldKUS1Q0E]
> leader config:
> num.partitions=3
> default.replication.factor=2
> delete.topic.enable=true
> auto.create.topics.enable=false
> num.recovery.threads.per.data.dir=1
> offsets.topic.replication.factor=3
> transaction.state.log.replication.factor=2
> transaction.state.log.min.isr=1
> offsets.retention.minutes=4320
> log.roll.ms=8640
> log.local.retention.ms=60
> log.segment.bytes=536870912
> num.replica.fetchers=1
> log.retention.ms=1581120
> remote.log.manager.thread.pool.size=4
> remote.log.reader.threads=4
> remote.log.metadata.topic.replication.factor=3
> remote.log.storage.system.enable=true
> remote.log.metadata.topic.retention.ms=18000
> rsm.config.fetch.chunk.cache.class=io.aiven.kafka.tieredstorage.fetch.cache.DiskChunkCache
> rsm.config.fetch.chunk.cache.path=/data01/kafka-tiered-storage-cache
> # Pick some cache size, 16 GiB here:
> rsm.config.fetch.chunk.cache.size=34359738368
> rsm.config.fetch.chunk.cache.retention.ms=120
> # Prefetching size, 16 MiB here:
> rsm.config.fetch.chunk.cache.prefetch.max.size=33554432
> rsm.config.storage.backend.class=io.aiven.kafka.tieredstorage.storage.s3.S3Storage
> rsm.config.storage.s3.bucket.name=
> rsm.config.storage.s3.region=us-west-1
> rsm.config.storage.aws.secret.access.key=
> rsm.config.storage.aws.access.key.id=
> rsm.config.chunk.size=8388608
> remote.log.storage.manager.class.path=/home/admin/core-0.0.1-SNAPSHOT/*:/home/admin/s3-0.0.1-SNAPSHOT/*
> remote.log.storage.manager.class.name=io.aiven.kafka.tieredstorage.RemoteStorageManager
> remote.log.metadata.manager.class.name=org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
> remote.log.metadata.manager.listener.name=PLAINTEXT
> rsm.config.upload.rate.limit.bytes.per.second=31457280
> replica config:
> num.partitions=3
> default.replication.factor=2
> delete.topic.enable=true
> auto.create.topics.enable=false
> num.recovery.threads.per.data.dir=1
> offs

[jira] [Created] (KAFKA-17020) After enabling tiered storage, occasional residual logs are left in the replica

2024-06-21 Thread Jianbin Chen (Jira)
Jianbin Chen created KAFKA-17020:


 Summary: After enabling tiered storage, occasional residual logs 
are left in the replica
 Key: KAFKA-17020
 URL: https://issues.apache.org/jira/browse/KAFKA-17020
 Project: Kafka
  Issue Type: Wish
Affects Versions: 3.7.0
Reporter: Jianbin Chen


After enabling tiered storage, residual log segments are occasionally left behind on the replica.
Based on what we have observed, the index values of the segments rolled by the 
replica and by the leader are not the same. As a result, the segments uploaded 
to S3 do not match the corresponding log files on the replica side, which makes 
it impossible to delete the local logs.
[!https://private-user-images.githubusercontent.com/19943636/341939158-d0b87a7d-aca1-4700-b3e1-fceff0530c79.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTkwMzY1OTIsIm5iZiI6MTcxOTAzNjI5MiwicGF0aCI6Ii8xOTk0MzYzNi8zNDE5MzkxNTgtZDBiODdhN2QtYWNhMS00NzAwLWIzZTEtZmNlZmYwNTMwYzc5LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA2MjIlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNjIyVDA2MDQ1MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWY3ZDQ2OGIxMmE3OGI2Njc2YzdkNzkwMzlhNmM5MzAxNjY0MWZiMzA2ZjgwNzgzM2JlYTMxMzM4Njk1NGI5MDYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.Sdsvwn0dUi_p1dG0W_AvQY6Iqeimy_UZ8VldKUS1Q0E!|https://private-user-images.githubusercontent.com/19943636/341939158-d0b87a7d-aca1-4700-b3e1-fceff0530c79.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTkwMzY1OTIsIm5iZiI6MTcxOTAzNjI5MiwicGF0aCI6Ii8xOTk0MzYzNi8zNDE5MzkxNTgtZDBiODdhN2QtYWNhMS00NzAwLWIzZTEtZmNlZmYwNTMwYzc5LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA2MjIlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNjIyVDA2MDQ1MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWY3ZDQ2OGIxMmE3OGI2Njc2YzdkNzkwMzlhNmM5MzAxNjY0MWZiMzA2ZjgwNzgzM2JlYTMxMzM4Njk1NGI5MDYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.Sdsvwn0dUi_p1dG0W_AvQY6Iqeimy_UZ8VldKUS1Q0E]
leader config:
num.partitions=3
default.replication.factor=2
delete.topic.enable=true
auto.create.topics.enable=false
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=1
offsets.retention.minutes=4320
log.roll.ms=8640
log.local.retention.ms=60
log.segment.bytes=536870912
num.replica.fetchers=1
log.retention.ms=1581120
remote.log.manager.thread.pool.size=4
remote.log.reader.threads=4
remote.log.metadata.topic.replication.factor=3
remote.log.storage.system.enable=true
remote.log.metadata.topic.retention.ms=18000
rsm.config.fetch.chunk.cache.class=io.aiven.kafka.tieredstorage.fetch.cache.DiskChunkCache
rsm.config.fetch.chunk.cache.path=/data01/kafka-tiered-storage-cache
# Pick some cache size, 16 GiB here:
rsm.config.fetch.chunk.cache.size=34359738368
rsm.config.fetch.chunk.cache.retention.ms=120
# Prefetching size, 16 MiB here:
rsm.config.fetch.chunk.cache.prefetch.max.size=33554432
rsm.config.storage.backend.class=io.aiven.kafka.tieredstorage.storage.s3.S3Storage
rsm.config.storage.s3.bucket.name=
rsm.config.storage.s3.region=us-west-1
rsm.config.storage.aws.secret.access.key=
rsm.config.storage.aws.access.key.id=
rsm.config.chunk.size=8388608
remote.log.storage.manager.class.path=/home/admin/core-0.0.1-SNAPSHOT/*:/home/admin/s3-0.0.1-SNAPSHOT/*
remote.log.storage.manager.class.name=io.aiven.kafka.tieredstorage.RemoteStorageManager
remote.log.metadata.manager.class.name=org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
remote.log.metadata.manager.listener.name=PLAINTEXT
rsm.config.upload.rate.limit.bytes.per.second=31457280
replica config:
num.partitions=3
default.replication.factor=2
delete.topic.enable=true
auto.create.topics.enable=false
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=1
offsets.retention.minutes=4320
log.roll.ms=8640
log.local.retention.ms=60
log.segment.bytes=536870912
num.replica.fetchers=1
log.retention.ms=1581120
remote.log.manager.thread.pool.size=4
remote.log.reader.threads=4
remote.log.metadata.topic.replication.factor=3
remote.log.storage.system.enable=true
#remote.log.metadata.topic.retention.ms=18000
rsm.config.fetch.chunk.cache.class=io.aiven.kafka.tieredstorage.fetch.cache.DiskChunkCache
rsm.config.fetch.chunk.cache.path=/data01/kafka-tiered-storage-cache
# Pick some cache size, 16 GiB here:
rsm.config.fetch.chunk.cache.size=34359738368

[jira] [Updated] (KAFKA-17019) Producer TimeoutException should include root cause

2024-06-21 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-17019:
--
Component/s: clients

> Producer TimeoutException should include root cause
> ---
>
> Key: KAFKA-17019
> URL: https://issues.apache.org/jira/browse/KAFKA-17019
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, producer 
>Reporter: Matthias J. Sax
>Priority: Major
>
> With KAFKA-16965 we added a "root cause" to some `TimeoutException` thrown by 
> the producer. However, it's only a partial solution to address a specific 
> issue.
> We should consider adding the "root cause" for _all_ `TimeoutException` cases 
> and unify/clean up the code to get a holistic solution to the problem.
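As an illustration of what carrying the root cause looks like (a hedged sketch, not the actual producer code; it only relies on the `TimeoutException(String, Throwable)` constructor from `org.apache.kafka.common.errors` in kafka-clients):
{code:java}
import org.apache.kafka.common.errors.TimeoutException;

public class TimeoutExceptionWithCause {
    public static void main(String[] args) {
        // Hypothetical underlying failure observed while the request was pending.
        Exception rootCause = new java.net.SocketTimeoutException("connection to broker-1 timed out");

        // Wrapping it preserves the exception chain, so users can see *why* the wait expired.
        TimeoutException withCause =
                new TimeoutException("Expiring record(s) after waiting for delivery.timeout.ms", rootCause);

        System.out.println(withCause.getMessage() + " caused by " + withCause.getCause());
    }
}
{code}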





[jira] [Commented] (KAFKA-16986) After upgrading to Kafka 3.4.1, the producer constantly produces logs related to topicId changes

2024-06-21 Thread Vinicius Vieira dos Santos (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856887#comment-17856887
 ] 

Vinicius Vieira dos Santos commented on KAFKA-16986:


[~jolshan] No problem! Thanks for the update.

> After upgrading to Kafka 3.4.1, the producer constantly produces logs related 
> to topicId changes
> 
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 3.0.1, 3.6.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Minor
> Attachments: image.png
>
>
> When upgrading the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" very frequently. From what I understand, this 
> behavior is not expected, because the topic was not deleted and recreated, so 
> the client should simply use the cached data and not emit this log line.
> We have some applications with around 15 topics and 40 partitions, which means 
> around 600 log lines whenever a metadata update occurs.
> The main thing for me is to know whether this could indicate a problem, or 
> whether I can simply change the log level of the org.apache.kafka.clients.Metadata 
> class to WARN without worries (a minimal log4j sketch follows after this message).
>  
> There are other reports of the same behavior like this:  
> [https://stackoverflow.com/questions/74652231/apache-kafka-resetting-the-last-seen-epoch-of-partition-why]
>  
> *Some log occurrences over an interval of about 7 hours, each block refers to 
> an instance of the application in kubernetes*
>  
> !image.png!
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> *Producer Config*
>  
>     acks = -1
>     auto.include.jmx.reporter = true
>     batch.size = 16384
>     bootstrap.servers = [server:9092]
>     buffer.memory = 33554432
>     client.dns.lookup = use_all_dns_ips
>     client.id = producer-1
>     compression.type = gzip
>     connections.max.idle.ms = 54
>     delivery.timeout.ms = 3
>     enable.idempotence = true
>     interceptor.classes = []
>     key.serializer = class 
> org.apache.kafka.common.serialization.ByteArraySerializer
>     linger.ms = 0
>     max.block.ms = 6
>     max.in.flight.requests.per.connection = 1
>     max.request.size = 1048576
>     metadata.max.age.ms = 30
>     metadata.max.idle.ms = 30
>     metric.reporters = []
>     metrics.num.samples = 2
>     metrics.recording.level = INFO
>     metrics.sample.window.ms = 3
>     partitioner.adaptive.partitioning.enable = true
>     partitioner.availability.timeout.ms = 0
>     partitioner.class = null
>     partitioner.ignore.keys = false
>     receive.buffer.bytes = 32768
>     reconnect.backoff.max.ms = 1000
>     reconnect.backoff.ms = 50
>     request.timeout.ms = 3
>     retries = 3
>     retry.backoff.ms = 100
>     sasl.client.callback.handler.class = null
>     sasl.jaas.config = [hidden]
>     sasl.kerberos.kinit.cmd = /usr/bin/kinit
>     sasl.kerberos.min.time.before.relogin = 6
>     sasl.kerberos.service.name = null
>     sasl.kerberos.ticket.renew.jitter = 0.05
>     sasl.kerberos.ticket.renew.window.factor = 0.8
>     sasl.login.callback.handler.class = null
>     sasl.login.class = null
>     sasl.login.connect.timeout.ms = null
>     sasl.login.read.timeout.ms = null
>     sasl.login.refresh.buffer.seconds = 300
>     sasl.login.refresh.min.period.seconds = 60
>     sasl.login.refresh.window.factor = 0.8
>     sasl.login.refresh.window.jitter = 0.05
>     sasl.login.retry.backoff.max.ms = 1
>     sasl.login.retry.backoff.ms = 100
>     sasl.mechanism = PLAIN
>     sasl.oauthbearer.clock.skew.seconds = 30
>     sasl.oauthbearer.expected.audience = null
>     sasl.oauthbearer.expected.issuer = null
>     sasl.oauthbearer.jwks.endpoint.refresh.ms = 360
>     sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 1
>     sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
>     sasl.oauthbearer.jwks.endpoint.url = null
>     sasl.oauthbearer.scope.claim.name = scope
>     sasl.oauthbearer.sub.claim.name = sub
>     sasl.oauthbearer.token.endpoint.url = null
>     security.protocol = SASL_PLAINTEXT
>     security.providers = null
>     send.buffer.bytes = 131072
>     socket.connection.setup.timeout.max.ms = 3
>     socket.connection.setup.timeout.ms = 1
>     ssl.cipher.suites = null
>     ssl.enabled.protocols = [TLSv1.2, TLSv1.3
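Regarding the question above about raising the log level of org.apache.kafka.clients.Metadata: assuming the application configures logging with a log4j 1.x-style properties file (many Kafka client applications do; logback or log4j2 need the equivalent logger entry), a single logger line is enough. This is an illustrative snippet, not taken from the reporter's setup:
{code}
# Raise only this logger to WARN so the per-partition
# "Resetting the last seen epoch ..." INFO lines are suppressed,
# leaving the rest of the client logging untouched.
log4j.logger.org.apache.kafka.clients.Metadata=WARN
{code}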

[jira] [Commented] (KAFKA-16986) After upgrading to Kafka 3.4.1, the producer constantly produces logs related to topicId changes

2024-06-21 Thread Justine Olshan (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856878#comment-17856878
 ] 

Justine Olshan commented on KAFKA-16986:


I haven't forgotten about this! Just working on some blocker bugs these past 
few days. I will try to take a look today and if not, early next week.

> After upgrading to Kafka 3.4.1, the producer constantly produces logs related 
> to topicId changes
> 
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 3.0.1, 3.6.1
>Reporter: Vinicius Vieira dos Santos
>Priority: Minor
> Attachments: image.png
>
>
> When upgrading the Kafka broker from version 2.7.0 to 3.4.1, we noticed that 
> the applications began to log the message "{*}Resetting the last seen epoch 
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null 
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" very frequently. From what I understand, this 
> behavior is not expected, because the topic was not deleted and recreated, so 
> the client should simply use the cached data and not emit this log line.
> We have some applications with around 15 topics and 40 partitions, which means 
> around 600 log lines whenever a metadata update occurs.
> The main thing for me is to know whether this could indicate a problem, or 
> whether I can simply change the log level of the org.apache.kafka.clients.Metadata 
> class to WARN without worries.
>  
> There are other reports of the same behavior like this:  
> [https://stackoverflow.com/questions/74652231/apache-kafka-resetting-the-last-seen-epoch-of-partition-why]
>  
> *Some log occurrences over an interval of about 7 hours, each block refers to 
> an instance of the application in kubernetes*
>  
> !image.png!
> *My scenario:*
> *Application:*
>  - Java: 21
>  - Client: 3.6.1, also tested on 3.0.1 and has the same behavior
> *Broker:*
>  - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52 
> image
>  
> *Producer Config*
>  
>     acks = -1
>     auto.include.jmx.reporter = true
>     batch.size = 16384
>     bootstrap.servers = [server:9092]
>     buffer.memory = 33554432
>     client.dns.lookup = use_all_dns_ips
>     client.id = producer-1
>     compression.type = gzip
>     connections.max.idle.ms = 54
>     delivery.timeout.ms = 3
>     enable.idempotence = true
>     interceptor.classes = []
>     key.serializer = class 
> org.apache.kafka.common.serialization.ByteArraySerializer
>     linger.ms = 0
>     max.block.ms = 6
>     max.in.flight.requests.per.connection = 1
>     max.request.size = 1048576
>     metadata.max.age.ms = 30
>     metadata.max.idle.ms = 30
>     metric.reporters = []
>     metrics.num.samples = 2
>     metrics.recording.level = INFO
>     metrics.sample.window.ms = 3
>     partitioner.adaptive.partitioning.enable = true
>     partitioner.availability.timeout.ms = 0
>     partitioner.class = null
>     partitioner.ignore.keys = false
>     receive.buffer.bytes = 32768
>     reconnect.backoff.max.ms = 1000
>     reconnect.backoff.ms = 50
>     request.timeout.ms = 3
>     retries = 3
>     retry.backoff.ms = 100
>     sasl.client.callback.handler.class = null
>     sasl.jaas.config = [hidden]
>     sasl.kerberos.kinit.cmd = /usr/bin/kinit
>     sasl.kerberos.min.time.before.relogin = 6
>     sasl.kerberos.service.name = null
>     sasl.kerberos.ticket.renew.jitter = 0.05
>     sasl.kerberos.ticket.renew.window.factor = 0.8
>     sasl.login.callback.handler.class = null
>     sasl.login.class = null
>     sasl.login.connect.timeout.ms = null
>     sasl.login.read.timeout.ms = null
>     sasl.login.refresh.buffer.seconds = 300
>     sasl.login.refresh.min.period.seconds = 60
>     sasl.login.refresh.window.factor = 0.8
>     sasl.login.refresh.window.jitter = 0.05
>     sasl.login.retry.backoff.max.ms = 1
>     sasl.login.retry.backoff.ms = 100
>     sasl.mechanism = PLAIN
>     sasl.oauthbearer.clock.skew.seconds = 30
>     sasl.oauthbearer.expected.audience = null
>     sasl.oauthbearer.expected.issuer = null
>     sasl.oauthbearer.jwks.endpoint.refresh.ms = 360
>     sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 1
>     sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
>     sasl.oauthbearer.jwks.endpoint.url = null
>     sasl.oauthbearer.scope.claim.name = scope
>     sasl.oauthbearer.sub.claim.name = sub
>     sasl.oauthbearer.token.endpoint.url = null
>     security.protocol = SASL_PLAINTEXT
>     security.providers = null
>     send.buffer.bytes = 131072
>     socket.connection.setup.timeout.max.ms = 3
>     socket.connection.setup.timeout.ms = 1

[jira] [Commented] (KAFKA-17011) SupportedFeatures.MinVersion incorrectly blocks v0

2024-06-21 Thread Justine Olshan (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856877#comment-17856877
 ] 

Justine Olshan commented on KAFKA-17011:


We plan to have two implementations – one for 3.8 and one for going forward. 
Here is the PR to unblock 3.8: https://github.com/apache/kafka/pull/16420

> SupportedFeatures.MinVersion incorrectly blocks v0
> --
>
> Key: KAFKA-17011
> URL: https://issues.apache.org/jira/browse/KAFKA-17011
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.8.0
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Blocker
>
> SupportedFeatures.MinVersion incorrectly blocks v0





[jira] [Updated] (KAFKA-16984) New consumer should wait for leave group response to avoid responses to disconnected clients

2024-06-21 Thread Lianet Magrans (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lianet Magrans updated KAFKA-16984:
---
Fix Version/s: 3.9.0

> New consumer should wait for leave group response to avoid responses to 
> disconnected clients
> 
>
> Key: KAFKA-16984
> URL: https://issues.apache.org/jira/browse/KAFKA-16984
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Affects Versions: 3.8.0
>Reporter: Lianet Magrans
>Assignee: Lianet Magrans
>Priority: Major
>  Labels: kip-848
> Fix For: 3.9.0
>
>
> When the new consumer attempts to leave a group, it sends the leave-group 
> request in fire-and-forget mode, so it transitions to UNSUBSCRIBED as soon as 
> it generates the request (without waiting for a response). Note that this 
> transition to UNSUBSCRIBED marks the leave operation as completed. As a 
> result, when leaving a group while closing a consumer, the member sends the 
> leave request and moves on to the next operation, which is closing the network 
> thread, so we end up with a disconnected client receiving responses from the 
> server. We should send the leave-group heartbeat and transition to 
> UNSUBSCRIBED (completing the leave operation) only when we get a response for 
> it, which is a much more accurate confirmation that the consumer has left the 
> group and can move on with other operations.
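As a rough illustration of the proposed ordering (a generic sketch with made-up names, not the actual AsyncConsumer/MembershipManager code): complete the transition to UNSUBSCRIBED only when the leave-group response arrives, and only then tear down the network thread.
{code:java}
import java.time.Duration;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class LeaveGroupSketch {
    // Stand-in for sending the leave-group heartbeat; in the real client the
    // network thread would complete this future when the broker's response arrives.
    static CompletableFuture<Void> sendLeaveGroupRequest() {
        return CompletableFuture.runAsync(() -> { /* request/response round trip */ });
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<Void> leaveResponse = sendLeaveGroupRequest();

        // Proposed ordering: wait (bounded by a timeout) for the response before
        // marking the member UNSUBSCRIBED and shutting down the network thread.
        leaveResponse.get(Duration.ofSeconds(30).toMillis(), TimeUnit.MILLISECONDS);
        System.out.println("Leave group acknowledged; safe to transition to UNSUBSCRIBED and close.");
    }
}
{code}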





[jira] [Updated] (KAFKA-17019) Producer TimeoutException should include root cause

2024-06-21 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-17019:

Description: 
With KAFKA-16965 we added a "root cause" to some `TimeoutException` thrown by 
the producer. However, it's only a partial solution to address a specific issue.

We should consider adding the "root cause" for _all_ `TimeoutException` cases 
and unify/clean up the code to get a holistic solution to the problem.

  was:
With KAFKA-16965 we added a "root cause" to some `TimeoutException` throws by 
the producer. However, it's only a partial solution to address a specific issue.

We should consider to add the "root cause" for _all_ `TimeoutException` cases 
and unify/cleanup the code to get an holistic solution to the problem.


> Producer TimeoutException should include root cause
> ---
>
> Key: KAFKA-17019
> URL: https://issues.apache.org/jira/browse/KAFKA-17019
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Reporter: Matthias J. Sax
>Priority: Major
>
> With KAFKA-16965 we added a "root cause" to some `TimeoutException` thrown by 
> the producer. However, it's only a partial solution to address a specific 
> issue.
> We should consider adding the "root cause" for _all_ `TimeoutException` cases 
> and unify/clean up the code to get a holistic solution to the problem.





[jira] [Created] (KAFKA-17019) Producer TimeoutException should include root cause

2024-06-21 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-17019:
---

 Summary: Producer TimeoutException should include root cause
 Key: KAFKA-17019
 URL: https://issues.apache.org/jira/browse/KAFKA-17019
 Project: Kafka
  Issue Type: Improvement
  Components: producer 
Reporter: Matthias J. Sax


With KAFKA-16965 we added a "root cause" to some `TimeoutException` throws by 
the producer. However, it's only a partial solution to address a specific issue.

We should consider to add the "root cause" for _all_ `TimeoutException` cases 
and unify/cleanup the code to get an holistic solution to the problem.





[jira] [Commented] (KAFKA-17015) ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be deprecated and throw an exception

2024-06-21 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856841#comment-17856841
 ] 

Matthias J. Sax commented on KAFKA-17015:
-

Why do you care about both, given that both are internal classes and not part of 
the public API? Also, what is the use case for storing either of them in an 
ArrayList or similar?

User code should not need to worry about it, and given that it's non-public API, 
I don't see any issue.

> ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be 
> deprecated and throw an exception
> -
>
> Key: KAFKA-17015
> URL: https://issues.apache.org/jira/browse/KAFKA-17015
> Project: Kafka
>  Issue Type: Improvement
>Reporter: dujian0068
>Assignee: dujian0068
>Priority: Minor
>
> When reviewing PR#16970, I found that the functions 
> `ContextualRecord#hashCode()` and `ProcessorRecordContext#hashCode()` are 
> deprecated because they have a mutable attribute, which causes the hashCode to 
> change.
> I don't think hashCode should be discarded just because it is mutable. 
> hashCode is a very important property of an object; it just shouldn't be used 
> for hash addressing, as in an ArrayList.
> 
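To illustrate the hazard behind the deprecation (a generic example, not Kafka's actual classes): when hashCode() depends on a mutable field, a hash-based collection can lose track of the object after mutation, which is why such objects are unsafe for hash addressing even though hashCode itself remains a useful property.
{code:java}
import java.util.HashSet;
import java.util.Set;

public class MutableHashCodeHazard {
    static final class Record {
        long offset; // mutable field that participates in hashCode/equals

        Record(long offset) { this.offset = offset; }

        @Override public int hashCode() { return Long.hashCode(offset); }
        @Override public boolean equals(Object o) {
            return o instanceof Record && ((Record) o).offset == offset;
        }
    }

    public static void main(String[] args) {
        Set<Record> set = new HashSet<>();
        Record r = new Record(1L);
        set.add(r);

        r.offset = 2L; // mutate after insertion: the stored hash bucket is now stale

        // Prints false: the set can no longer find an element it still holds.
        System.out.println("contains after mutation: " + set.contains(r));
    }
}
{code}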





[jira] [Commented] (KAFKA-16707) Kafka Kraft : adding Principal Type in StandardACL for matching with KafkaPrincipal of connected client in order to defined ACL with a notion of group

2024-06-21 Thread Greg Harris (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856831#comment-17856831
 ] 

Greg Harris commented on KAFKA-16707:
-

Hi [~handfreezer] Thanks for contributing to Kafka!

I'm not very familiar with this area, but it looks like this is adding support 
for new Principal Types? I see some parts of the code that check for "User" and 
have an error if the principal type is anything else. So does this constitute a 
new feature?

If so, a KIP explaining the feature and a mailing list discussion with the 
community would be the next steps for moving it forward: 
[https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals]

> Kafka Kraft : adding Principal Type in StandardACL for matching with 
> KafkaPrincipal of connected client in order to defined ACL with a notion of 
> group
> --
>
> Key: KAFKA-16707
> URL: https://issues.apache.org/jira/browse/KAFKA-16707
> Project: Kafka
>  Issue Type: Improvement
>  Components: kraft, security
>Affects Versions: 3.7.0, 3.8.0, 3.7.1
>Reporter: Franck LEDAY
>Assignee: Franck LEDAY
>Priority: Major
>  Labels: KafkaPrincipal, acl, authorization, group, metadata, 
> security, user
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> The default StandardAuthorizer in KRaft mode defines a KafkaPrincipal as 
> type=User plus a name, and, optionally, a special wildcard.
> The difficulty with this approach is that we cannot define ACLs for a group of 
> KafkaPrincipals.
> At the moment there is a way to do so by defining a RULE that rewrites the 
> KafkaPrincipal name field, BUT introducing the notion of a group that way 
> requires rules that make you lose the unique part of the KafkaPrincipal name 
> of the connected client.
> The concept here, in the StandardAuthorizer of Kafka KRaft, is to add support 
> for the following KafkaPrincipal types:
>  * Regex
>  * StartsWith
>  * EndsWith
>  * Contains
>  * (User is still available and keeps working as before, to avoid any 
> regression/issue with current configurations)
> This would be done in the StandardAcl class of metadata/authorizer, and the 
> findResult method of StandardAuthorizerData would delegate the match to the 
> StandardAcl class (for performance reasons; see the explanation below).
> This way, you can still use RULEs to rewrite the KafkaPrincipal name of the 
> connected client (say you want to transform the DN of an SSL certificate: 
> cn=myCN,ou=myOU,c=FR becomes myCN@myOU), and then define a new ACL with a 
> principal like 'Regex:^.*@my[oO]U$' that matches all connected clients with a 
> certificate bound to ou=myOU. Note that in this particular case the same can 
> be done with 'EndsWith:@myOU', and the type 'Contains' would also work, but I 
> imagine that type being used more for matching against a multi-group 
> definition in a KafkaPrincipal.
> 
> Note about the performance reason: for the moment I have this implemented in a 
> fork of StandardAuthorizer/StandardAuthorizerData/StandardAcl, configured via 
> the authorizer.class.name property in a KRaft cluster with SSL authentication 
> required, and it has tested fine. But with that approach, every time an ACL is 
> checked against a KafkaPrincipal, I do a string comparison on the 
> KafkaPrincipal type of the ACL to determine which matching method to apply. By 
> implementing it in the StandardAcl class, and delegating the matching from 
> StandardAuthorizerData to StandardAcl, the matching type can be parsed and 
> stored as an enum, and the KafkaPrincipal name stored separately, so that this 
> work is not redone each time a match has to be checked (a rough sketch of this 
> idea follows below).
> 
> Here is the status of my implementation:
>  * I have this solution (the 'performance reason' variant) implemented in a 
> fork (then a branch) of the 3.7.0 GitHub repo,
>  * I added a few unit tests, and gradlew metadata:test passes on all tests 
> except one (which also fails on branch 3.7.0 without my changes),
>  * I added a few lines about it in security.html.
> 
> I'm opening this issue to discuss it with you, because I would like to create 
> a PR on GitHub for the next version.
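A rough sketch of the matching idea described above (my illustration of the proposal, not the author's actual patch): parse the ACL principal once into an enum plus a name, then reuse that for every check, so no per-match string comparison of the type is needed.
{code:java}
import java.util.regex.Pattern;

// Illustrative only: mirrors the proposed Regex/StartsWith/EndsWith/Contains types
// plus the existing User behavior; it is not the real StandardAcl code.
public class PrincipalMatcher {
    enum MatchType { USER, REGEX, STARTS_WITH, ENDS_WITH, CONTAINS }

    private final MatchType type;
    private final String name;
    private final Pattern pattern; // compiled once for REGEX, reused on every match

    PrincipalMatcher(String aclPrincipal) {
        // ACL principals look like "User:alice", "Regex:^.*@my[oO]U$", "EndsWith:@myOU", ...
        String[] parts = aclPrincipal.split(":", 2);
        MatchType parsed;
        switch (parts[0]) {
            case "Regex":      parsed = MatchType.REGEX; break;
            case "StartsWith": parsed = MatchType.STARTS_WITH; break;
            case "EndsWith":   parsed = MatchType.ENDS_WITH; break;
            case "Contains":   parsed = MatchType.CONTAINS; break;
            default:           parsed = MatchType.USER; break;
        }
        this.type = parsed;
        this.name = parts[1];
        this.pattern = parsed == MatchType.REGEX ? Pattern.compile(parts[1]) : null;
    }

    boolean matches(String principalName) {
        switch (type) {
            case REGEX:       return pattern.matcher(principalName).matches();
            case STARTS_WITH: return principalName.startsWith(name);
            case ENDS_WITH:   return principalName.endsWith(name);
            case CONTAINS:    return principalName.contains(name);
            default:          return principalName.equals(name); // existing User behavior
        }
    }

    public static void main(String[] args) {
        PrincipalMatcher acl = new PrincipalMatcher("Regex:^.*@my[oO]U$");
        System.out.println(acl.matches("myCN@myOU")); // true
    }
}
{code}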





[jira] [Updated] (KAFKA-17013) RequestManager#ConnectionState#toString() should use %s

2024-06-21 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-17013:
--
Component/s: (was: clients)
 (was: consumer)

> RequestManager#ConnectionState#toString() should use %s
> ---
>
> Key: KAFKA-17013
> URL: https://issues.apache.org/jira/browse/KAFKA-17013
> Project: Kafka
>  Issue Type: Bug
>Reporter: dujian0068
>Assignee: dujian0068
>Priority: Minor
>  Labels: consumer-threading-refactor
>
> RequestManager#ConnectionState#toString() should use %s
> [https://github.com/apache/kafka/blob/trunk/raft/src/main/java/org/apache/kafka/raft/RequestManager.java#L375]





[jira] [Updated] (KAFKA-17013) RequestManager#ConnectionState#toString() should use %s

2024-06-21 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-17013:
--
Labels:   (was: consumer-threading-refactor)

> RequestManager#ConnectionState#toString() should use %s
> ---
>
> Key: KAFKA-17013
> URL: https://issues.apache.org/jira/browse/KAFKA-17013
> Project: Kafka
>  Issue Type: Bug
>Reporter: dujian0068
>Assignee: dujian0068
>Priority: Minor
>
> RequestManager#ConnectionState#toString() should use %s
> [https://github.com/apache/kafka/blob/trunk/raft/src/main/java/org/apache/kafka/raft/RequestManager.java#L375]





[jira] [Updated] (KAFKA-17013) RequestManager#ConnectionState#toString() should use %s

2024-06-21 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-17013:
--
Labels: consumer-threading-refactor  (was: )

> RequestManager#ConnectionState#toString() should use %s
> ---
>
> Key: KAFKA-17013
> URL: https://issues.apache.org/jira/browse/KAFKA-17013
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Reporter: dujian0068
>Assignee: dujian0068
>Priority: Minor
>  Labels: consumer-threading-refactor
>
> RequestManager#ConnectionState#toString() should use %s
> [https://github.com/apache/kafka/blob/trunk/raft/src/main/java/org/apache/kafka/raft/RequestManager.java#L375]





[jira] [Updated] (KAFKA-17013) RequestManager#ConnectionState#toString() should use %s

2024-06-21 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-17013:
--
Component/s: clients
 consumer

> RequestManager#ConnectionState#toString() should use %s
> ---
>
> Key: KAFKA-17013
> URL: https://issues.apache.org/jira/browse/KAFKA-17013
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Reporter: dujian0068
>Assignee: dujian0068
>Priority: Minor
>
> RequestManager#ConnectionState#toString() should use %s
> [https://github.com/apache/kafka/blob/trunk/raft/src/main/java/org/apache/kafka/raft/RequestManager.java#L375]





[jira] [Commented] (KAFKA-17014) ScramFormatter should not use String for password.

2024-06-21 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856811#comment-17856811
 ] 

Arpit Agarwal commented on KAFKA-17014:
---

Sure [~bmilk], it's currently unassigned.

> ScramFormatter should not use String for password.
> --
>
> Key: KAFKA-17014
> URL: https://issues.apache.org/jira/browse/KAFKA-17014
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz-wo Sze
>Priority: Major
>
> Since String is immutable, there is no easy way to erase a String password 
> after use.  We should not use String for password.  See also  
> https://stackoverflow.com/questions/8881291/why-is-char-preferred-over-string-for-passwords
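A minimal illustration of the char[] alternative (a generic JDK-only snippet, not the actual ScramFormatter API): a char[] buffer can be zeroed as soon as the secret has been used, which is impossible with an immutable String.
{code:java}
import java.util.Arrays;

public class CharArrayPassword {
    public static void main(String[] args) {
        // In a real client this would come from Console.readPassword() or a credential store.
        char[] password = {'s', '3', 'c', 'r', '3', 't'};
        try {
            // ... hand the char[] to the SCRAM/HMAC computation here ...
            System.out.println("password length used: " + password.length);
        } finally {
            Arrays.fill(password, '\0'); // erase the secret once it is no longer needed
        }
    }
}
{code}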





[jira] [Updated] (KAFKA-17017) AsyncConsumer#unsubscribe does not clean the assigned partitions

2024-06-21 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-17017:
---
Labels: kip-848-client-support  (was: )

> AsyncConsumer#unsubscribe does not clean the assigned partitions
> 
>
> Key: KAFKA-17017
> URL: https://issues.apache.org/jira/browse/KAFKA-17017
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>  Labels: kip-848-client-support
>
> According to the docs [0], `Consumer#unsubscribe` should clean up both the 
> subscribed and the assigned partitions. However, there are two issues with 
> `AsyncConsumer`:
> 1) if we don't set a group id, `AsyncConsumer#unsubscribe`[1] is a no-op
> 2) if we set a group id, `AsyncConsumer` is always in the `UNSUBSCRIBED` state, 
> so `MembershipManagerImpl#leaveGroup`[2] is a no-op
> [0] 
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L759
> [1] 
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AsyncKafkaConsumer.java#L1479
> [2] 
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/MembershipManagerImpl.java#L666



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16830) Remove the scala version formatters support

2024-06-21 Thread Kuan Po Tseng (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuan Po Tseng reassigned KAFKA-16830:
-

Assignee: Kuan Po Tseng  (was: Ksolves)

> Remove the scala version formatters support
> ---
>
> Key: KAFKA-16830
> URL: https://issues.apache.org/jira/browse/KAFKA-16830
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Kuan Po Tseng
>Priority: Minor
> Fix For: 4.0.0
>
>
> https://github.com/apache/kafka/blob/trunk/tools/src/main/java/org/apache/kafka/tools/consumer/ConsoleConsumerOptions.java#L353
>  
> {code:java}
> private static String convertDeprecatedClass(String className) {
> switch (className) {
> case "kafka.tools.DefaultMessageFormatter":
> System.err.println("WARNING: 
> kafka.tools.DefaultMessageFormatter is deprecated and will be removed in the 
> next major release. " +
> "Please use 
> org.apache.kafka.tools.consumer.DefaultMessageFormatter instead");
> return DefaultMessageFormatter.class.getName();
> case "kafka.tools.LoggingMessageFormatter":
> System.err.println("WARNING: 
> kafka.tools.LoggingMessageFormatter is deprecated and will be removed in the 
> next major release. " +
> "Please use 
> org.apache.kafka.tools.consumer.LoggingMessageFormatter instead");
> return LoggingMessageFormatter.class.getName();
> case "kafka.tools.NoOpMessageFormatter":
> System.err.println("WARNING: kafka.tools.NoOpMessageFormatter 
> is deprecated and will be removed in the next major release. " +
> "Please use 
> org.apache.kafka.tools.consumer.NoOpMessageFormatter instead");
> return NoOpMessageFormatter.class.getName();
> default:
> return className;
> }
> }
> {code}
> Those deprecated formatter "strings" should be removed in 4.0.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16830) Remove the scala version formatters support

2024-06-21 Thread Kuan Po Tseng (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856809#comment-17856809
 ] 

Kuan Po Tseng commented on KAFKA-16830:
---

Hi [~ksolves.kafka], yes, I'm working on this one.

> Remove the scala version formatters support
> ---
>
> Key: KAFKA-16830
> URL: https://issues.apache.org/jira/browse/KAFKA-16830
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Ksolves
>Priority: Minor
> Fix For: 4.0.0
>
>
> https://github.com/apache/kafka/blob/trunk/tools/src/main/java/org/apache/kafka/tools/consumer/ConsoleConsumerOptions.java#L353
>  
> {code:java}
> private static String convertDeprecatedClass(String className) {
> switch (className) {
> case "kafka.tools.DefaultMessageFormatter":
> System.err.println("WARNING: 
> kafka.tools.DefaultMessageFormatter is deprecated and will be removed in the 
> next major release. " +
> "Please use 
> org.apache.kafka.tools.consumer.DefaultMessageFormatter instead");
> return DefaultMessageFormatter.class.getName();
> case "kafka.tools.LoggingMessageFormatter":
> System.err.println("WARNING: 
> kafka.tools.LoggingMessageFormatter is deprecated and will be removed in the 
> next major release. " +
> "Please use 
> org.apache.kafka.tools.consumer.LoggingMessageFormatter instead");
> return LoggingMessageFormatter.class.getName();
> case "kafka.tools.NoOpMessageFormatter":
> System.err.println("WARNING: kafka.tools.NoOpMessageFormatter 
> is deprecated and will be removed in the next major release. " +
> "Please use 
> org.apache.kafka.tools.consumer.NoOpMessageFormatter instead");
> return NoOpMessageFormatter.class.getName();
> default:
> return className;
> }
> }
> {code}
> Those deprecated formatter "strings" should be removed in 4.0.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17018) Metadata version 3.9 should return Fetch version 17

2024-06-21 Thread José Armando García Sancio (Jira)
José Armando García Sancio created KAFKA-17018:
--

 Summary: Metadata version 3.9 should return Fetch version 17
 Key: KAFKA-17018
 URL: https://issues.apache.org/jira/browse/KAFKA-17018
 Project: Kafka
  Issue Type: Sub-task
Reporter: José Armando García Sancio
 Fix For: 3.9.0


MetadataVersion.fetchRequestVersion() should return 17 for version 3.9



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-17017) AsyncConsumer#unsubscribe does not clean the assigned partitions

2024-06-21 Thread PoAn Yang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856800#comment-17856800
 ] 

PoAn Yang commented on KAFKA-17017:
---

Hi [~chia7712], I'm interested in this issue. If you're not working on it, may 
I take it? Thank you.

> AsyncConsumer#unsubscribe does not clean the assigned partitions
> 
>
> Key: KAFKA-17017
> URL: https://issues.apache.org/jira/browse/KAFKA-17017
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>
> According to docs [0] `Consumer#unsubscribe` should clean both subscribed and 
> assigned partitions. However, there are two issues about `AsyncConsumer`
> 1)  if we don't set group id,  `AsyncConsumer#unsubscribe`[1] will be no-op
> 2)  if we set group id, `AsyncConsumer` is always in `UNSUBSCRIBED` state and 
> so `MembershipManagerImpl#leaveGroup`[2] will be no-op
> [0] 
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L759
> [1] 
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AsyncKafkaConsumer.java#L1479
> [2] 
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/MembershipManagerImpl.java#L666



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-17017) AsyncConsumer#unsubscribe does not clean the assigned partitions

2024-06-21 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-17017:
---
Description: 
According to docs [0] `Consumer#unsubscribe` should clean both subscribed and 
assigned partitions. However, there are two issues about `AsyncConsumer`

1)  if we don't set group id,  `AsyncConsumer#unsubscribe`[1] will be no-op
2)  if we set group id, `AsyncConsumer` is always in `UNSUBSCRIBED` state and 
so `MembershipManagerImpl#leaveGroup`[2] will be no-op

[0] 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L759
[1] 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AsyncKafkaConsumer.java#L1479
[2] 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/MembershipManagerImpl.java#L666

  was:
According to docs [0] `Consumer#unsubscribe` should clean both subscribed and 
assigned partitions. However, there are two issues about `AsyncConsumer`

1)  if we don't set group id,  `AsyncConsumer#unsubscribe`[1] will be no-op
2)  if we set group id, `AsyncConsumer` is always in `UNSUBSCRIBED` state and 
so `MembershipManagerImpl#leaveGroup` will be no-op

[0] 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L759
[1] 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AsyncKafkaConsumer.java#L1479
[2] 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/MembershipManagerImpl.java#L666


> AsyncConsumer#unsubscribe does not clean the assigned partitions
> 
>
> Key: KAFKA-17017
> URL: https://issues.apache.org/jira/browse/KAFKA-17017
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>
> According to docs [0] `Consumer#unsubscribe` should clean both subscribed and 
> assigned partitions. However, there are two issues about `AsyncConsumer`
> 1)  if we don't set group id,  `AsyncConsumer#unsubscribe`[1] will be no-op
> 2)  if we set group id, `AsyncConsumer` is always in `UNSUBSCRIBED` state and 
> so `MembershipManagerImpl#leaveGroup`[2] will be no-op
> [0] 
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L759
> [1] 
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AsyncKafkaConsumer.java#L1479
> [2] 
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/MembershipManagerImpl.java#L666



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17017) AsyncConsumer#unsubscribe does not clean the assigned partitions

2024-06-21 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-17017:
--

 Summary: AsyncConsumer#unsubscribe does not clean the assigned 
partitions
 Key: KAFKA-17017
 URL: https://issues.apache.org/jira/browse/KAFKA-17017
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


According to docs [0] `Consumer#unsubscribe` should clean both subscribed and 
assigned partitions. However, there are two issues about `AsyncConsumer`

1)  if we don't set group id,  `AsyncConsumer#unsubscribe`[1] will be no-op
2)  if we set group id, `AsyncConsumer` is always in `UNSUBSCRIBED` state and 
so `MembershipManagerImpl#leaveGroup` will be no-op

[0] 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L759
[1] 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AsyncKafkaConsumer.java#L1479
[2] 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/MembershipManagerImpl.java#L666
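
For reference, a minimal sketch of the documented contract (bootstrap server, topic, and group id are placeholders, and a running broker is assumed): after unsubscribe() both the subscription and the assignment should be empty.
{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class UnsubscribeContractSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("input-topic"));
            consumer.poll(Duration.ofMillis(500)); // may trigger a group assignment
            consumer.unsubscribe();
            // Both should print true according to the KafkaConsumer javadoc.
            System.out.println("subscription cleared: " + consumer.subscription().isEmpty());
            System.out.println("assignment cleared:   " + consumer.assignment().isEmpty());
        }
    }
}
{code}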



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-16965: Throw cause of TimeoutException [kafka]

2024-06-21 Thread via GitHub


artemlivshits commented on code in PR #16344:
URL: https://github.com/apache/kafka/pull/16344#discussion_r1648464072


##
clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java:
##
@@ -1176,18 +1176,25 @@ private ClusterAndWaitTime waitOnMetadata(String topic, 
Integer partition, long
 metadata.awaitUpdate(version, remainingWaitMs);
 } catch (TimeoutException ex) {
 // Rethrow with original maxWaitMs to prevent logging 
exception with remainingWaitMs
-throw new TimeoutException(
-String.format("Topic %s not present in metadata after 
%d ms.",
-topic, maxWaitMs));
+final String errorMessage = String.format("Topic %s not 
present in metadata after %d ms.",
+topic, maxWaitMs);
+if (metadata.getError(topic) != null) {

Review Comment:
   I have a couple questions:
   1. Looks like we're just changing the TimeoutException handling on this 
codepath, but wouldn't we want to change this across the board, i.e. whenever 
we generate TimeoutException we actually add the last retriable error (unless 
it's a true timeout and we simply didn't get a reply)?
   2. Rather than do this error state tunnelling across multiple levels, 
wouldn't it be simpler to just add the root cause whenever we decide to 
initially throw TimeoutException (i.e. when we get a retriable error but throw 
TimeoutException instead)?
   
   Also I'm not sure if we get an error in the metadata request if the topic is 
just missing -- it looks like the metadata request would succeed, just without 
containing the topic.
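   
   A standalone sketch of the second suggestion, i.e. attaching the last retriable 
error as the cause right where the `TimeoutException` is first thrown. The class, 
method, and parameter names below are illustrative, not the actual KafkaProducer 
internals:

```java
import org.apache.kafka.common.errors.TimeoutException;

public class TimeoutCauseSketch {

    // Throws a TimeoutException that carries the last retriable error (if any) as its
    // cause, so callers can see why the wait for metadata actually failed.
    static void throwTopicMetadataTimeout(String topic, long maxWaitMs, Throwable lastRetriableError) {
        String message = String.format("Topic %s not present in metadata after %d ms.", topic, maxWaitMs);
        if (lastRetriableError != null) {
            throw new TimeoutException(message, lastRetriableError);
        }
        throw new TimeoutException(message);
    }
}
```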



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] KAFKA-16755: Implement lock timeout functionality in SharePartition [kafka]

2024-06-21 Thread via GitHub


adixitconfluent opened a new pull request, #16414:
URL: https://github.com/apache/kafka/pull/16414

   ### About
   Implemented acquisition lock timeout functionality in SharePartition. 
Implemented the following functions - 
   1. `releaseAcquisitionLockOnTimeout` - This function is executed when the 
acquisition lock timeout is reached. The function releases the acquired records.
   2. `releaseAcquisitionLockOnTimeoutForCompleteBatch` - Function which 
releases acquired records maintained at a batch level.
   3. `releaseAcquisitionLockOnTimeoutForPerOffsetBatch` - Function which 
releases acquired records maintained at an offset level.
   
   ### Testing
   Added unit tests to cover the new functionality added.
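   
   For readers outside the share-group work, a rough, illustrative-only sketch of 
what an acquisition lock timeout does (none of the names below come from the 
actual SharePartition code): after records are acquired, a timer task is scheduled 
that releases them again if they are not acknowledged in time.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class AcquisitionLockTimeoutSketch {

    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

    // releaseAction stands in for "move the acquired batch back to an available state".
    public ScheduledFuture<?> acquire(long firstOffset, long lastOffset, long lockTimeoutMs, Runnable releaseAction) {
        System.out.printf("acquired offsets [%d, %d], lock expires in %d ms%n",
                firstOffset, lastOffset, lockTimeoutMs);
        return timer.schedule(releaseAction, lockTimeoutMs, TimeUnit.MILLISECONDS);
    }

    // If the consumer acknowledges in time, the pending release is cancelled.
    public void acknowledge(ScheduledFuture<?> pendingRelease) {
        pendingRelease.cancel(false);
    }
}
```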


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Revert "KAFKA-16154: Broker returns offset for LATEST_TIERED_TIMESTAM… [kafka]

2024-06-21 Thread via GitHub


jlprat commented on code in PR #16400:
URL: https://github.com/apache/kafka/pull/16400#discussion_r1648468712


##
server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java:
##
@@ -231,7 +231,7 @@ public enum MetadataVersion {
  * Think carefully before you update this value. ONCE A METADATA 
VERSION IS PRODUCTION,
  * IT CANNOT BE CHANGED.
  */
-public static final MetadataVersion LATEST_PRODUCTION = IBP_3_7_IV4;
+public static final MetadataVersion LATEST_PRODUCTION = IBP_3_8_IV0;

Review Comment:
   Applied feedback, except for "Should we bring IBP_3_9_IV0, the new MV 
associated with ELR to the 3.8 branch or change all those tests related to 
ELR?". I'm waiting for @cmccabe's feedback on this one.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-15623, Migrate test of Stream module to Junit5 (Stream state) [kafka]

2024-06-21 Thread via GitHub


m1a2st commented on code in PR #16356:
URL: https://github.com/apache/kafka/pull/16356#discussion_r1647508440


##
streams/src/test/java/org/apache/kafka/streams/state/internals/AbstractRocksDBSessionStoreTest.java:
##
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.streams.state.internals;
+
+public abstract class AbstractRocksDBSessionStoreTest extends 
AbstractSessionBytesStoreTest {
+
+@Override
+String getStoreName() {
+return "rocksDB session store";
+}
+
+abstract StoreType storeType();

Review Comment:
   @chia7712, Thanks for your comment, I think it' a good idea.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-15623, Migrate test of Stream module to Junit5 (Stream state) [kafka]

2024-06-21 Thread via GitHub


m1a2st commented on PR #16356:
URL: https://github.com/apache/kafka/pull/16356#issuecomment-2180588283

   @chia7712, Thanks for your comment, PTAL


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-17008) Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944

2024-06-21 Thread Arushi Helms (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856788#comment-17856788
 ] 

Arushi Helms commented on KAFKA-17008:
--

Thanks for the update. 

> Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944
> 
>
> Key: KAFKA-17008
> URL: https://issues.apache.org/jira/browse/KAFKA-17008
> Project: Kafka
>  Issue Type: Bug
>Reporter: Arushi Helms
>Priority: Major
>
> Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944.
> I could not find an existing ticket for this; if there is one, please mark 
> this as a duplicate. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-15623: Migrate streams tests (processor) module to JUnit 5 [kafka]

2024-06-21 Thread via GitHub


frankvicky commented on code in PR #16396:
URL: https://github.com/apache/kafka/pull/16396#discussion_r1647505748


##
streams/src/test/java/org/apache/kafka/streams/processor/internals/assignment/LegacyStickyTaskAssignorTest.java:
##
@@ -100,79 +99,71 @@
 import static org.hamcrest.Matchers.is;
 import static org.hamcrest.Matchers.lessThanOrEqualTo;
 import static org.hamcrest.Matchers.not;
-import static org.junit.Assert.assertFalse;
+import static org.junit.jupiter.api.Assertions.assertFalse;
 import static org.mockito.Mockito.spy;
 
-@RunWith(Parameterized.class)
 public class LegacyStickyTaskAssignorTest {
 
 private final List expectedTopicGroupIds = asList(1, 2);
 private final Time time = new MockTime();
 private final Map clients = new TreeMap<>();
 
-private boolean enableRackAwareTaskAssignor;
-
-@Parameter
-public String rackAwareStrategy;
-
-@Before
-public void setUp() {
-enableRackAwareTaskAssignor = 
!rackAwareStrategy.equals(StreamsConfig.RACK_AWARE_ASSIGNMENT_STRATEGY_NONE);
-}
-
-@Parameterized.Parameters(name = "rackAwareStrategy={0}")
-public static Collection getParamStoreType() {
-return asList(new Object[][] {
-{StreamsConfig.RACK_AWARE_ASSIGNMENT_STRATEGY_NONE},
-{StreamsConfig.RACK_AWARE_ASSIGNMENT_STRATEGY_MIN_TRAFFIC},
-{StreamsConfig.RACK_AWARE_ASSIGNMENT_STRATEGY_BALANCE_SUBTOPOLOGY},
-});
+static Stream paramStoreType() {

Review Comment:
   Hmmm, maybe not.
   
   The purpose of this PR is to make minimal changes to migrate to JUnit 5. 
Therefore, each method should manually call the setup method and pass 
`rackAwareStrategy` as an argument to determine the value of 
`enableRackAwareTaskAssignor`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-16811 Sliding window approach to calculate non-zero punctuate-ratio metric [kafka]

2024-06-21 Thread via GitHub


mjsax commented on code in PR #16162:
URL: https://github.com/apache/kafka/pull/16162#discussion_r1645260947


##
streams/src/main/java/org/apache/kafka/streams/processor/internals/PunctuateRatioSlidingWindow.java:
##
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.streams.processor.internals;
+
+import org.apache.kafka.common.utils.Time;
+
+import java.util.LinkedList;
+import java.util.Queue;
+
+public class PunctuateRatioSlidingWindow {
+    private final Queue<RatioTimeStamp> ratioQueue;
+    private final long windowSizeMillis;
+    private final Time time;
+
+    public PunctuateRatioSlidingWindow(long windowSizeMillis, Time time) {
+        this.windowSizeMillis = windowSizeMillis;
+        this.ratioQueue = new LinkedList<>();
+        this.time = time;
+    }
+
+    public void update(double ratio) {
+        long currentTimeMillis = time.milliseconds();
+        ratioQueue.offer(new RatioTimeStamp(ratio, currentTimeMillis));
+        pruneQueue(currentTimeMillis);
+    }
+
+    private void pruneQueue(long currentTimeMillis) {
+        while (!ratioQueue.isEmpty()) {
+            RatioTimeStamp oldest = ratioQueue.peek();
+            if (currentTimeMillis - oldest.getTimestamp() > windowSizeMillis) {
+                ratioQueue.poll();
+            } else {
+                break;
+            }
+        }
+    }
+
+    public double getAverageRatio() {
+        return ratioQueue.stream()
+                .mapToDouble(RatioTimeStamp::getRatio)
+                .average()

Review Comment:
   Seems we are computing an average over a ratio. Is this mathematically 
sound? I believe not.
   
   When we record the punctuate ratio, the "time frame" over which the ratio is 
computed is not guaranteed to be of a fixed size. Thus, the different ratios 
would need to be weighted differently for a correct computation?
   
   Seems, instead of keeping a window of ratio samples, we should rather keep the 
raw latency and runtime values, i.e., a window of pairs `totalPunctuateLatency` 
and `runOnceLatency`, including pre-computed `sumTotalPunctuateLatency` and 
`sumRunOnceLatency`, and compute the result as `sumTotalPunctuateLatency / 
sumRunOnceLatency`?
   
   When updating this queue, we can also update both running sums by adding new 
and removing old values.
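   
   A rough sketch of that alternative (class and field names here are made up, not 
taken from the PR): store the raw latency pairs in the window together with running 
sums, and report sumPunctuateLatencyMs / sumRunOnceLatencyMs.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class LatencyWeightedRatioWindow {

    private static final class Sample {
        final long timestampMs;
        final long punctuateLatencyMs;
        final long runOnceLatencyMs;

        Sample(long timestampMs, long punctuateLatencyMs, long runOnceLatencyMs) {
            this.timestampMs = timestampMs;
            this.punctuateLatencyMs = punctuateLatencyMs;
            this.runOnceLatencyMs = runOnceLatencyMs;
        }
    }

    private final Deque<Sample> samples = new ArrayDeque<>();
    private final long windowSizeMs;
    private long sumPunctuateLatencyMs;
    private long sumRunOnceLatencyMs;

    public LatencyWeightedRatioWindow(long windowSizeMs) {
        this.windowSizeMs = windowSizeMs;
    }

    public void record(long nowMs, long punctuateLatencyMs, long runOnceLatencyMs) {
        samples.addLast(new Sample(nowMs, punctuateLatencyMs, runOnceLatencyMs));
        sumPunctuateLatencyMs += punctuateLatencyMs;
        sumRunOnceLatencyMs += runOnceLatencyMs;
        // Evict samples that fell out of the window, keeping the running sums in sync.
        while (!samples.isEmpty() && nowMs - samples.peekFirst().timestampMs > windowSizeMs) {
            Sample oldest = samples.removeFirst();
            sumPunctuateLatencyMs -= oldest.punctuateLatencyMs;
            sumRunOnceLatencyMs -= oldest.runOnceLatencyMs;
        }
    }

    public double ratio() {
        return sumRunOnceLatencyMs == 0 ? 0.0 : (double) sumPunctuateLatencyMs / sumRunOnceLatencyMs;
    }
}
```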



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-14405: Log a warning when users attempt to set a config controlled by Streams [kafka]

2024-06-21 Thread via GitHub


mjsax commented on code in PR #12988:
URL: https://github.com/apache/kafka/pull/12988#discussion_r1645262881


##
streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java:
##
@@ -999,6 +988,10 @@ public class StreamsConfig extends AbstractConfig {
 (name, value) -> 
verifyTopologyOptimizationConfigs((String) value),
 Importance.MEDIUM,
 TOPOLOGY_OPTIMIZATION_DOC)
+.define(ProducerConfig.PARTITIONER_CLASS_CONFIG,

Review Comment:
   Why do we add this here? It's not a StreamsConfig, so it seems it should not 
be added?



##
streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java:
##
@@ -1182,41 +1175,64 @@ public class StreamsConfig extends AbstractConfig {
 WINDOW_SIZE_MS_DOC);
 }
 
-// this is the list of configs for underlying clients
-// that streams prefer different default values
-private static final Map PRODUCER_DEFAULT_OVERRIDES;
+// KS_DEFAULT_PRODUCER_CONFIGS - default producer configs for Kafka Streams

Review Comment:
   ```suggestion
   // default producer configs for Kafka Streams
   ```



##
streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java:
##
@@ -1182,41 +1175,64 @@ public class StreamsConfig extends AbstractConfig {
 WINDOW_SIZE_MS_DOC);
 }
 
-// this is the list of configs for underlying clients
-// that streams prefer different default values
-private static final Map PRODUCER_DEFAULT_OVERRIDES;
+// KS_DEFAULT_PRODUCER_CONFIGS - default producer configs for Kafka Streams
+private static final Map KS_DEFAULT_PRODUCER_CONFIGS;
 static {
 final Map tempProducerDefaultOverrides = new 
HashMap<>();
 tempProducerDefaultOverrides.put(ProducerConfig.LINGER_MS_CONFIG, 
"100");
-PRODUCER_DEFAULT_OVERRIDES = 
Collections.unmodifiableMap(tempProducerDefaultOverrides);
+
+KS_DEFAULT_PRODUCER_CONFIGS = 
Collections.unmodifiableMap(tempProducerDefaultOverrides);
 }
 
-private static final Map PRODUCER_EOS_OVERRIDES;
+// KS_DEFAULT_PRODUCER_CONFIGS_EOS_ENABLED - default producer configs for 
Kafka Streams with EOS enabled
+private static final Map 
KS_DEFAULT_PRODUCER_CONFIGS_EOS_ENABLED;
 static {
-final Map tempProducerDefaultOverrides = new 
HashMap<>(PRODUCER_DEFAULT_OVERRIDES);
+final Map tempProducerDefaultOverrides = new 
HashMap<>(KS_DEFAULT_PRODUCER_CONFIGS);
 
tempProducerDefaultOverrides.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 
Integer.MAX_VALUE);
-
tempProducerDefaultOverrides.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, 
true);
-// Reduce the transaction timeout for quicker pending offset 
expiration on broker side.
 
tempProducerDefaultOverrides.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 
DEFAULT_TRANSACTION_TIMEOUT);
 
-PRODUCER_EOS_OVERRIDES = 
Collections.unmodifiableMap(tempProducerDefaultOverrides);
+KS_DEFAULT_PRODUCER_CONFIGS_EOS_ENABLED = 
Collections.unmodifiableMap(tempProducerDefaultOverrides);
+}
+
+// KS_CONTROLLED_PRODUCER_CONFIGS_EOS_ENABLED - Kafka Streams producer 
configs that cannot be overridden by the user with EOS enabled
+private static final Map 
KS_CONTROLLED_PRODUCER_CONFIGS_EOS_ENABLED;
+static {
+final Map tempProducerDefaultOverrides = new 
HashMap<>();
+
tempProducerDefaultOverrides.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, 
true);
+
tempProducerDefaultOverrides.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, null);
+
+KS_CONTROLLED_PRODUCER_CONFIGS_EOS_ENABLED = 
Collections.unmodifiableMap(tempProducerDefaultOverrides);
 }
 
-private static final Map CONSUMER_DEFAULT_OVERRIDES;
+// KS_DEFAULT_CONSUMER_CONFIGS - default consumer configs for Kafka Streams
+private static final Map KS_DEFAULT_CONSUMER_CONFIGS;
 static {
 final Map tempConsumerDefaultOverrides = new 
HashMap<>();
 
tempConsumerDefaultOverrides.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 
"1000");
 
tempConsumerDefaultOverrides.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, 
"earliest");
-
tempConsumerDefaultOverrides.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, 
"false");
 tempConsumerDefaultOverrides.put("internal.leave.group.on.close", 
false);
-CONSUMER_DEFAULT_OVERRIDES = 
Collections.unmodifiableMap(tempConsumerDefaultOverrides);
+
+KS_DEFAULT_CONSUMER_CONFIGS = 
Collections.unmodifiableMap(tempConsumerDefaultOverrides);
 }
 
-private static final Map CONSUMER_EOS_OVERRIDES;
+// KS_CONTROLLED_CONSUMER_CONFIGS - Kafka Streams consumer configs that 
cannot be overridden by the user

Review Comment:
   ```suggestion
   // Kafka Streams consumer configs that cannot be overridden by the user
   ```



##
streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java:
##
@@ 

Re: [PR] KAFKA-15259: Processing must continue with flush + commitTnx [kafka]

2024-06-21 Thread via GitHub


artemlivshits commented on code in PR #16332:
URL: https://github.com/apache/kafka/pull/16332#discussion_r1645285270


##
clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java:
##
@@ -1257,6 +1261,9 @@ public void flush() {
 this.sender.wakeup();
 try {
 this.accumulator.awaitFlushCompletion();
+if (transactionManager != null) {
+transactionManager.maybeClearLastError();

Review Comment:
   To make it harder to write buggy code, by keeping the semantics of similar 
constructs similar. `send + flush + commit` looks like just a more verbose form 
of `send + commit`; the latter works without handling errors in send (a correct 
basic program with minimal effort), while the former would swallow the error.
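   
   For context, a self-contained sketch of the pattern being discussed (broker 
address, topic, and transactional.id are placeholders): the expectation argued for 
above is that an error hit by `send` surfaces at `commitTransaction` even if the 
application never checks the send future, and that a `flush` in between does not 
clear that error state.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.serialization.StringSerializer;

public class SendFlushCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "demo-txn-id");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            try {
                producer.send(new ProducerRecord<>("output-topic", "key", "value")); // may fail asynchronously
                producer.flush();              // waits for in-flight records
                producer.commitTransaction();  // expected to throw if the send above failed
            } catch (KafkaException e) {
                // A production handler would distinguish fatal errors before retrying.
                producer.abortTransaction();
            }
        }
    }
}
```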



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] KAFKA-10787: Apply spotless to `clients` [kafka]

2024-06-21 Thread via GitHub


gongxuanzhang opened a new pull request, #16393:
URL: https://github.com/apache/kafka/pull/16393

   This PR is sub PR from https://github.com/apache/kafka/pull/16097.
   It is part of a series of changes to progressively apply [spotless 
plugin(import-order)] across all modules. In this step, the plugin is activated 
in the 
   * stream:test-utils
   * streams:upgrade-system-tests-0100
   
   ## Module and history 
   
   > Please see the table below for the historical changes related to applying 
the Spotless plugin
   
   | module| apply |  related PR  |
   | -- | --- | --- |
   | :clients | ❌| future |
   | :connect:api | ✅| https://github.com/apache/kafka/pull/16299 |
   | :connect:basic-auth-extension | ✅| 
https://github.com/apache/kafka/pull/16299 |
   | :connect:file | ✅| https://github.com/apache/kafka/pull/16299 |
   | :connect:json | ✅   | https://github.com/apache/kafka/pull/16299 |
   | :connect:mirror | ✅   | https://github.com/apache/kafka/pull/16299 |
   | :connect:mirror-client | ✅   | https://github.com/apache/kafka/pull/16299 |
   | :connect:runtime | ❌| future |
   | :connect:test-plugins | ✅   | https://github.com/apache/kafka/pull/16299 |
   | :connect:transforms | ✅   | https://github.com/apache/kafka/pull/16299 |
   | :core | ✅| https://github.com/apache/kafka/pull/16392 |
   | :examples |  ✅| https://github.com/apache/kafka/pull/16296 |
   | :generator |  ✅ | https://github.com/apache/kafka/pull/16296 |
   | :group-coordinator:group-coordinator-api | ✅   | 
https://github.com/apache/kafka/pull/16298 |
   | :group-coordinator | ✅  | https://github.com/apache/kafka/pull/16298 |
   | :jmh-benchmarks |  ✅ | https://github.com/apache/kafka/pull/16296 |
   | :log4j-appender |  ✅ | https://github.com/apache/kafka/pull/16296 |
   | :metadata | ✅ | https://github.com/apache/kafka/pull/16297 |
   | :server | ✅ | https://github.com/apache/kafka/pull/16297 |
   | :shell |  ✅ | https://github.com/apache/kafka/pull/16296 |
   | :storage | ✅ | https://github.com/apache/kafka/pull/16297 |
   | :storage:storage-api | ✅ | https://github.com/apache/kafka/pull/16297 |
   | :streams | ❌ | future |
   | :streams:examples |  ✅   | https://github.com/apache/kafka/pull/16378 |
   | :streams:streams-scala | ✅| https://github.com/apache/kafka/pull/16378 
|
   | :streams:test-utils |  ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-0100 |  ✅ | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-0101 | ✅| 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-0102 | ✅| 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-0110 | ✅| 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-10 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-11 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-20 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-21 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-22 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-23 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-24 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-25 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-26 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-27 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-28 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-30 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-31 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-32 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-33 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-34 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-35 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-36 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-37 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :trogdor |  ✅   | https://github.com/apache/kafka/pull/16296 |
   | :raft |  ✅| https://github.com/apache/kafka/pull/16278 |
   | :server-common | ✅| https://github.com/apache/kafka/pull/16172 |
   | :transaction-coordinator | ✅| 
https://github.com/apache/kafka/pull/16172 |
   | :tools |  ✅   |  https://github.com/apache/kafka/pull/16262  |
   | 

Re: [PR] KAFKA-16957: Enable KafkaConsumerTest#configurableObjectsShouldSeeGeneratedClientId to work with CLASSIC and CONSUMER [kafka]

2024-06-21 Thread via GitHub


chia7712 merged PR #16370:
URL: https://github.com/apache/kafka/pull/16370


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] KAFKA-10787: Apply spotless to `core` [kafka]

2024-06-21 Thread via GitHub


gongxuanzhang opened a new pull request, #16392:
URL: https://github.com/apache/kafka/pull/16392

   This PR is sub PR from https://github.com/apache/kafka/pull/16097.
   It is part of a series of changes to progressively apply [spotless 
plugin(import-order)] across all modules. In this step, the plugin is activated 
in the 
   * stream:test-utils
   * streams:upgrade-system-tests-0100
   
   ## Module and history 
   
   > Please see the table below for the historical changes related to applying 
the Spotless plugin
   
   | module| apply |  related PR  |
   | -- | --- | --- |
   | :clients | ❌| future |
   | :connect:api | ✅| https://github.com/apache/kafka/pull/16299 |
   | :connect:basic-auth-extension | ✅| 
https://github.com/apache/kafka/pull/16299 |
   | :connect:file | ✅| https://github.com/apache/kafka/pull/16299 |
   | :connect:json | ✅   | https://github.com/apache/kafka/pull/16299 |
   | :connect:mirror | ✅   | https://github.com/apache/kafka/pull/16299 |
   | :connect:mirror-client | ✅   | https://github.com/apache/kafka/pull/16299 |
   | :connect:runtime | ❌| future |
   | :connect:test-plugins | ✅   | https://github.com/apache/kafka/pull/16299 |
   | :connect:transforms | ✅   | https://github.com/apache/kafka/pull/16299 |
   | :core | ✅| future |
   | :examples |  ✅| https://github.com/apache/kafka/pull/16296 |
   | :generator |  ✅ | https://github.com/apache/kafka/pull/16296 |
   | :group-coordinator:group-coordinator-api | ✅   | 
https://github.com/apache/kafka/pull/16298 |
   | :group-coordinator | ✅  | https://github.com/apache/kafka/pull/16298 |
   | :jmh-benchmarks |  ✅ | https://github.com/apache/kafka/pull/16296 |
   | :log4j-appender |  ✅ | https://github.com/apache/kafka/pull/16296 |
   | :metadata | ✅ | https://github.com/apache/kafka/pull/16297 |
   | :server | ✅ | https://github.com/apache/kafka/pull/16297 |
   | :shell |  ✅ | https://github.com/apache/kafka/pull/16296 |
   | :storage | ✅ | https://github.com/apache/kafka/pull/16297 |
   | :storage:storage-api | ✅ | https://github.com/apache/kafka/pull/16297 |
   | :streams | ❌ | future |
   | :streams:examples |  ✅   | https://github.com/apache/kafka/pull/16378 |
   | :streams:streams-scala | ✅| https://github.com/apache/kafka/pull/16378 
|
   | :streams:test-utils |  ✅ | https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-0100 |  ✅ | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-0101 | ✅| 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-0102 | ✅| 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-0110 | ✅| 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-10 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-11 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-20 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-21 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-22 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-23 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-24 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-25 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-26 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-27 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-28 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-30 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-31 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-32 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-33 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-34 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-35 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-36 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :streams:upgrade-system-tests-37 | ✅   | 
https://github.com/apache/kafka/pull/16357 |
   | :trogdor |  ✅   | https://github.com/apache/kafka/pull/16296 |
   | :raft |  ✅| https://github.com/apache/kafka/pull/16278 |
   | :server-common | ✅| https://github.com/apache/kafka/pull/16172 |
   | :transaction-coordinator | ✅| 
https://github.com/apache/kafka/pull/16172 |
   | :tools |  ✅   |  https://github.com/apache/kafka/pull/16262  |
   | :tools:tools-api | ✅   | https://git

Re: [PR] KAFKA-16989: Use StringBuilder instead of String concatenation [kafka]

2024-06-21 Thread via GitHub


chia7712 merged PR #16385:
URL: https://github.com/apache/kafka/pull/16385


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-16811 Sliding window approach to calculate non-zero punctuate-ratio metric [kafka]

2024-06-21 Thread via GitHub


mjsax commented on PR #16162:
URL: https://github.com/apache/kafka/pull/16162#issuecomment-2177326618

   Do we need to also update/refine the description of the metric in 
`docs/ops.html`? -- Even wondering if this might need a KIP (strictly speaking) 
as we change the semantics of the metric? \cc @cadonna WDYT?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-15302: Stale value returned when using store.all() with key deletion [docs] [kafka]

2024-06-21 Thread via GitHub


mjsax commented on PR #15495:
URL: https://github.com/apache/kafka/pull/15495#issuecomment-2177315273

   Thanks for the PR @jinyongchoi -- and sorry for the delay -- seems the PR 
slipped through the cracks...
   
   Merged to `trunk` and cherry-picked to `3.8` branch (just in time for 3.8 
release :))


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-15302: Stale value returned when using store.all() with key deletion [docs] [kafka]

2024-06-21 Thread via GitHub


mjsax merged PR #15495:
URL: https://github.com/apache/kafka/pull/15495


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: use 2 logdirs in ZK migration system tests [kafka]

2024-06-21 Thread via GitHub


showuon commented on PR #15394:
URL: https://github.com/apache/kafka/pull/15394#issuecomment-2177279854

   Sorry for being late. @soarez, should we backport this to the v3.8 branch 
since it should work in v3.8.0?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-16000 Migrated MembershipManagerImplTest away from ConsumerTestBuilder [kafka]

2024-06-21 Thread via GitHub


brenden20 commented on PR #16312:
URL: https://github.com/apache/kafka/pull/16312#issuecomment-2177253091

   @lianetm thank you for the feedback! I have implemented all suggestions, it 
is looking really good now I think!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-16000 Migrated MembershipManagerImplTest away from ConsumerTestBuilder [kafka]

2024-06-21 Thread via GitHub


brenden20 commented on code in PR #16312:
URL: https://github.com/apache/kafka/pull/16312#discussion_r1645190334


##
clients/src/test/java/org/apache/kafka/clients/consumer/internals/MembershipManagerImplTest.java:
##
@@ -366,12 +384,12 @@ public void testFencingWhenStateIsPrepareLeaving() {
 completeCallback(callbackEvent, membershipManager);
 assertEquals(MemberState.UNSUBSCRIBED, membershipManager.state());
 assertEquals(ConsumerGroupHeartbeatRequest.LEAVE_GROUP_MEMBER_EPOCH, 
membershipManager.memberEpoch());
-verify(membershipManager).notifyEpochChange(Optional.empty(), 
Optional.empty());

Review Comment:
   Makes sense, I have reverted that change



##
clients/src/test/java/org/apache/kafka/clients/consumer/internals/MembershipManagerImplTest.java:
##
@@ -386,15 +404,16 @@ public void 
testNewAssignmentIgnoredWhenStateIsPrepareLeaving() {
 receiveAssignment(topicId, Arrays.asList(0, 1), membershipManager);
 assertEquals(MemberState.PREPARE_LEAVING, membershipManager.state());
 assertTrue(membershipManager.topicsAwaitingReconciliation().isEmpty());
-verify(membershipManager, never()).markReconciliationInProgress();
 
 // When callback completes member should transition to LEAVING.
 completeCallback(callbackEvent, membershipManager);
+membershipManager.transitionToSendingLeaveGroup(false);

Review Comment:
   Removed now



##
clients/src/test/java/org/apache/kafka/clients/consumer/internals/MembershipManagerImplTest.java:
##
@@ -2457,10 +2538,7 @@ private CompletableFuture 
mockRevocationNoCallbacks(boolean withAutoCommit
 doNothing().when(subscriptionState).markPendingRevocation(anySet());
 
when(subscriptionState.rebalanceListener()).thenReturn(Optional.empty()).thenReturn(Optional.empty());
 if (withAutoCommit) {
-when(commitRequestManager.autoCommitEnabled()).thenReturn(true);
-CompletableFuture commitResult = new CompletableFuture<>();
-
when(commitRequestManager.maybeAutoCommitSyncBeforeRevocation(anyLong())).thenReturn(commitResult);
-return commitResult;

Review Comment:
   I have reverted changes here



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (KAFKA-17012) Enable testMeasureCommitSyncDuration, testMeasureCommittedDurationOnFailure, testInvalidGroupMetadata, testMeasureCommittedDuration, testOffsetsForTimesTimeout, testBeg

2024-06-21 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-17012.

Fix Version/s: 3.9.0
   Resolution: Fixed

> Enable testMeasureCommitSyncDuration, testMeasureCommittedDurationOnFailure, 
> testInvalidGroupMetadata, testMeasureCommittedDuration, 
> testOffsetsForTimesTimeout, testBeginningOffsetsTimeout and 
> testEndOffsetsTimeout for AsyncConsumer
> 
>
> Key: KAFKA-17012
> URL: https://issues.apache.org/jira/browse/KAFKA-17012
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: xuanzhang gong
>Priority: Minor
> Fix For: 3.9.0
>
>
> just test my fingers - it seems "testMeasureCommitSyncDuration, 
> testMeasureCommittedDurationOnFailure, testInvalidGroupMetadata, 
> testMeasureCommittedDuration, testOffsetsForTimesTimeout, 
> testBeginningOffsetsTimeout, testEndOffsetsTimeout" can work with 
> AsyncConsumer.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-17007) Fix SourceAndTarget#equal

2024-06-21 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-17007.

Fix Version/s: 3.9.0
   Resolution: Fixed

> Fix SourceAndTarget#equal
> -
>
> Key: KAFKA-17007
> URL: https://issues.apache.org/jira/browse/KAFKA-17007
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: PoAn Yang
>Priority: Minor
> Fix For: 3.9.0
>
>
> In reviewing https://github.com/apache/kafka/pull/16404 I noticed that 
> SourceAndTarget is a public class. Hence, we should fix `equals` so that it 
> also checks the class type [0].
> [0] 
> https://github.com/apache/kafka/blob/trunk/connect/mirror-client/src/main/java/org/apache/kafka/connect/mirror/SourceAndTarget.java#L49
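
For illustration, a class-type-checking `equals` shown on a stand-in class with the same shape (two String fields); this is not the actual org.apache.kafka.connect.mirror.SourceAndTarget code:
{code:java}
import java.util.Objects;

final class SourceAndTargetLike {
    private final String source;
    private final String target;

    SourceAndTargetLike(String source, String target) {
        this.source = source;
        this.target = target;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (obj == null || getClass() != obj.getClass()) return false; // the missing type check
        SourceAndTargetLike other = (SourceAndTargetLike) obj;
        return Objects.equals(source, other.source) && Objects.equals(target, other.target);
    }

    @Override
    public int hashCode() {
        return Objects.hash(source, target);
    }
}
{code}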



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-17015) ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be deprecated and throw an exception

2024-06-21 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856752#comment-17856752
 ] 

Chia-Ping Tsai commented on KAFKA-17015:


Normally, we should implement both `equals` and `hashCode`. However, `equals` 
is used for testing only, and so we don't expect `hashCode` to get used.
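
A small, self-contained illustration (not Streams code) of why a hashCode derived from a mutable field breaks hash addressing:
{code:java}
import java.util.HashSet;
import java.util.Set;

class MutableKey {
    int value;
    MutableKey(int value) { this.value = value; }
    @Override public boolean equals(Object o) { return o instanceof MutableKey && ((MutableKey) o).value == value; }
    @Override public int hashCode() { return value; }
}

public class MutableHashCodeDemo {
    public static void main(String[] args) {
        Set<MutableKey> set = new HashSet<>();
        MutableKey key = new MutableKey(1);
        set.add(key);
        key.value = 2;                          // mutate after insertion
        System.out.println(set.contains(key));  // prints false: lookup now uses a different bucket
    }
}
{code}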

 

 

> ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be 
> deprecated and throw an exception
> -
>
> Key: KAFKA-17015
> URL: https://issues.apache.org/jira/browse/KAFKA-17015
> Project: Kafka
>  Issue Type: Improvement
>Reporter: dujian0068
>Assignee: dujian0068
>Priority: Minor
>
> While reviewing PR#16970, I found that `ContextualRecord#hashCode()` and 
> `ProcessorRecordContext#hashCode()` are deprecated because they depend on a 
> mutable attribute, which can cause the hashCode to change.
> I don't think hashCode should be discarded just because the object is mutable. 
> hashCode is a very important property of an object; it just shouldn't be used 
> for hash addressing, like ArrayList.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16900) kafka-producer-perf-test reports error when using transaction.

2024-06-21 Thread Kuan Po Tseng (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuan Po Tseng reassigned KAFKA-16900:
-

Assignee: Kuan Po Tseng

> kafka-producer-perf-test reports error when using transaction.
> --
>
> Key: KAFKA-16900
> URL: https://issues.apache.org/jira/browse/KAFKA-16900
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Reporter: Chen He
>Assignee: Kuan Po Tseng
>Priority: Minor
>  Labels: perf-test
>
> [https://lists.apache.org/thread/dmrbx8kzv2w5t1v0xjvyjbp5y23omlq8]
> I encountered the same issue as mentioned above. 
> I did not find the 2.13 version in the affected versions, so I marked it as 
> the latest one provided, 2.9. Please feel free to change it if possible. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16900) kafka-producer-perf-test reports error when using transaction.

2024-06-21 Thread Kuan Po Tseng (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856750#comment-17856750
 ] 

Kuan Po Tseng commented on KAFKA-16900:
---

Agree with [~chia7712], the behavior of transactionsEnabled is weird... I'll 
start fixing this, thanks

> kafka-producer-perf-test reports error when using transaction.
> --
>
> Key: KAFKA-16900
> URL: https://issues.apache.org/jira/browse/KAFKA-16900
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Reporter: Chen He
>Priority: Minor
>  Labels: perf-test
>
> [https://lists.apache.org/thread/dmrbx8kzv2w5t1v0xjvyjbp5y23omlq8]
> I encountered the same issue as mentioned above. 
> I did not find the 2.13 version in the affected versions, so I marked it as 
> the latest one provided, 2.9. Please feel free to change it if possible. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16830) Remove the scala version formatters support

2024-06-21 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856748#comment-17856748
 ] 

Chia-Ping Tsai commented on KAFKA-16830:


[~ksolves.kafka] this issue should be shipped in 4.0, so all we can do is wait 
:)

> Remove the scala version formatters support
> ---
>
> Key: KAFKA-16830
> URL: https://issues.apache.org/jira/browse/KAFKA-16830
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Ksolves
>Priority: Minor
> Fix For: 4.0.0
>
>
> https://github.com/apache/kafka/blob/trunk/tools/src/main/java/org/apache/kafka/tools/consumer/ConsoleConsumerOptions.java#L353
>  
> {code:java}
> private static String convertDeprecatedClass(String className) {
> switch (className) {
> case "kafka.tools.DefaultMessageFormatter":
> System.err.println("WARNING: 
> kafka.tools.DefaultMessageFormatter is deprecated and will be removed in the 
> next major release. " +
> "Please use 
> org.apache.kafka.tools.consumer.DefaultMessageFormatter instead");
> return DefaultMessageFormatter.class.getName();
> case "kafka.tools.LoggingMessageFormatter":
> System.err.println("WARNING: 
> kafka.tools.LoggingMessageFormatter is deprecated and will be removed in the 
> next major release. " +
> "Please use 
> org.apache.kafka.tools.consumer.LoggingMessageFormatter instead");
> return LoggingMessageFormatter.class.getName();
> case "kafka.tools.NoOpMessageFormatter":
> System.err.println("WARNING: kafka.tools.NoOpMessageFormatter 
> is deprecated and will be removed in the next major release. " +
> "Please use 
> org.apache.kafka.tools.consumer.NoOpMessageFormatter instead");
> return NoOpMessageFormatter.class.getName();
> default:
> return className;
> }
> }
> {code}
> Those deprecated formatter "strings" should be removed in 4.0.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16855) KRaft - Wire replaying a TopicRecord

2024-06-21 Thread Luke Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856741#comment-17856741
 ] 

Luke Chen commented on KAFKA-16855:
---

Great! Thanks!

> KRaft - Wire replaying a TopicRecord
> 
>
> Key: KAFKA-16855
> URL: https://issues.apache.org/jira/browse/KAFKA-16855
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Christo Lolov
>Assignee: Muralidhar Basani
>Priority: Major
>
> *Summary*
> Replaying a TopicRecord containing a new TieredEpoch and TieredState needs to 
> interact with the two thread pools in the RemoteLogManager to add/remove the 
> correct tasks from each



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-17016) Align the behavior of GaugeWrapper and MeterWrapper

2024-06-21 Thread PoAn Yang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856740#comment-17856740
 ] 

PoAn Yang commented on KAFKA-17016:
---

Hi [~chia7712], I'm interested in this issue. May I take it? Thank you.

> Align the behavior of GaugeWrapper and MeterWrapper
> ---
>
> Key: KAFKA-17016
> URL: https://issues.apache.org/jira/browse/KAFKA-17016
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> MeterWrapper [0] can auto-recreate the removed metrics, but GaugeWrapper [1] 
> can't. We should align the behavior in order to avoid potential bugs.
> [0] 
> https://github.com/apache/kafka/blob/9b5b434e2a6b2d5290ea403fc02859b1c523d8aa/core/src/main/scala/kafka/server/KafkaRequestHandler.scala#L261
> [1] 
> https://github.com/apache/kafka/blob/9b5b434e2a6b2d5290ea403fc02859b1c523d8aa/core/src/main/scala/kafka/server/KafkaRequestHandler.scala#L286



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-17016) Align the behavior of GaugeWrapper and MeterWrapper

2024-06-21 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-17016:
--

Assignee: PoAn Yang  (was: Chia-Ping Tsai)

> Align the behavior of GaugeWrapper and MeterWrapper
> ---
>
> Key: KAFKA-17016
> URL: https://issues.apache.org/jira/browse/KAFKA-17016
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: PoAn Yang
>Priority: Minor
>
> MeterWrapper [0] can auto-recreate the removed metrics, but GaugeWrapper [1] 
> can't. We should align the behavior in order to avoid potential bugs.
> [0] 
> https://github.com/apache/kafka/blob/9b5b434e2a6b2d5290ea403fc02859b1c523d8aa/core/src/main/scala/kafka/server/KafkaRequestHandler.scala#L261
> [1] 
> https://github.com/apache/kafka/blob/9b5b434e2a6b2d5290ea403fc02859b1c523d8aa/core/src/main/scala/kafka/server/KafkaRequestHandler.scala#L286



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17016) Align the behavior of GaugeWrapper and MeterWrapper

2024-06-21 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-17016:
--

 Summary: Align the behavior of GaugeWrapper and MeterWrapper
 Key: KAFKA-17016
 URL: https://issues.apache.org/jira/browse/KAFKA-17016
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


MeterWrapper [0] can auto-recreate the removed metrics, but GaugeWrapper [1] 
can't. We should align the behavior in order to avoid potential bugs.


[0] 
https://github.com/apache/kafka/blob/9b5b434e2a6b2d5290ea403fc02859b1c523d8aa/core/src/main/scala/kafka/server/KafkaRequestHandler.scala#L261
[1] 
https://github.com/apache/kafka/blob/9b5b434e2a6b2d5290ea403fc02859b1c523d8aa/core/src/main/scala/kafka/server/KafkaRequestHandler.scala#L286
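
For context, a minimal sketch of the "remove on close, re-register on next read" pattern that MeterWrapper follows and that a gauge wrapper could adopt. The class below is illustrative only; the real wrappers live in KafkaRequestHandler.scala and use the actual metrics registry rather than a plain map.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Illustrative sketch (not the actual Kafka classes): the wrapper removes its
// metric on close() and transparently re-registers it the next time the value
// is read, matching the auto-recreate behaviour of MeterWrapper.
final class RecreatingGaugeWrapper<T> {
    private static final Map<String, Supplier<?>> REGISTRY = new ConcurrentHashMap<>();

    private final String metricName;
    private final Supplier<T> valueSupplier;
    private volatile boolean registered;

    RecreatingGaugeWrapper(String metricName, Supplier<T> valueSupplier) {
        this.metricName = metricName;
        this.valueSupplier = valueSupplier;
    }

    // Read the gauge value, re-registering the metric if it was removed.
    T value() {
        if (!registered) {
            synchronized (this) {
                if (!registered) {
                    REGISTRY.put(metricName, valueSupplier);  // auto-recreate
                    registered = true;
                }
            }
        }
        return valueSupplier.get();
    }

    // Remove the metric; a later value() call will recreate it.
    void close() {
        synchronized (this) {
            REGISTRY.remove(metricName);
            registered = false;
        }
    }
}
{code}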



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16830) Remove the scala version formatters support

2024-06-21 Thread Ksolves (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856737#comment-17856737
 ] 

Ksolves commented on KAFKA-16830:
-

[~brandboat] I didn't notice that this was already assigned to you, and I have 
reassigned it to myself. Let me know if you're still working on it.

> Remove the scala version formatters support
> ---
>
> Key: KAFKA-16830
> URL: https://issues.apache.org/jira/browse/KAFKA-16830
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Ksolves
>Priority: Minor
> Fix For: 4.0.0
>
>
> https://github.com/apache/kafka/blob/trunk/tools/src/main/java/org/apache/kafka/tools/consumer/ConsoleConsumerOptions.java#L353
>  
> {code:java}
> private static String convertDeprecatedClass(String className) {
>     switch (className) {
>         case "kafka.tools.DefaultMessageFormatter":
>             System.err.println("WARNING: kafka.tools.DefaultMessageFormatter is deprecated and will be removed in the next major release. " +
>                 "Please use org.apache.kafka.tools.consumer.DefaultMessageFormatter instead");
>             return DefaultMessageFormatter.class.getName();
>         case "kafka.tools.LoggingMessageFormatter":
>             System.err.println("WARNING: kafka.tools.LoggingMessageFormatter is deprecated and will be removed in the next major release. " +
>                 "Please use org.apache.kafka.tools.consumer.LoggingMessageFormatter instead");
>             return LoggingMessageFormatter.class.getName();
>         case "kafka.tools.NoOpMessageFormatter":
>             System.err.println("WARNING: kafka.tools.NoOpMessageFormatter is deprecated and will be removed in the next major release. " +
>                 "Please use org.apache.kafka.tools.consumer.NoOpMessageFormatter instead");
>             return NoOpMessageFormatter.class.getName();
>         default:
>             return className;
>     }
> }
> {code}
> Those deprecated formatters "strings" should be removed from 4.0.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16830) Remove the scala version formatters support

2024-06-21 Thread Ksolves (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856736#comment-17856736
 ] 

Ksolves commented on KAFKA-16830:
-

Thanks [~chia7712] and [~brandboat] 

Will review and start working on this.

> Remove the scala version formatters support
> ---
>
> Key: KAFKA-16830
> URL: https://issues.apache.org/jira/browse/KAFKA-16830
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Kuan Po Tseng
>Priority: Minor
> Fix For: 4.0.0
>
>
> https://github.com/apache/kafka/blob/trunk/tools/src/main/java/org/apache/kafka/tools/consumer/ConsoleConsumerOptions.java#L353
>  
> {code:java}
> private static String convertDeprecatedClass(String className) {
>     switch (className) {
>         case "kafka.tools.DefaultMessageFormatter":
>             System.err.println("WARNING: kafka.tools.DefaultMessageFormatter is deprecated and will be removed in the next major release. " +
>                 "Please use org.apache.kafka.tools.consumer.DefaultMessageFormatter instead");
>             return DefaultMessageFormatter.class.getName();
>         case "kafka.tools.LoggingMessageFormatter":
>             System.err.println("WARNING: kafka.tools.LoggingMessageFormatter is deprecated and will be removed in the next major release. " +
>                 "Please use org.apache.kafka.tools.consumer.LoggingMessageFormatter instead");
>             return LoggingMessageFormatter.class.getName();
>         case "kafka.tools.NoOpMessageFormatter":
>             System.err.println("WARNING: kafka.tools.NoOpMessageFormatter is deprecated and will be removed in the next major release. " +
>                 "Please use org.apache.kafka.tools.consumer.NoOpMessageFormatter instead");
>             return NoOpMessageFormatter.class.getName();
>         default:
>             return className;
>     }
> }
> {code}
> Those deprecated formatters "strings" should be removed from 4.0.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16830) Remove the scala version formatters support

2024-06-21 Thread Ksolves (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ksolves reassigned KAFKA-16830:
---

Assignee: Ksolves  (was: Kuan Po Tseng)

> Remove the scala version formatters support
> ---
>
> Key: KAFKA-16830
> URL: https://issues.apache.org/jira/browse/KAFKA-16830
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Ksolves
>Priority: Minor
> Fix For: 4.0.0
>
>
> https://github.com/apache/kafka/blob/trunk/tools/src/main/java/org/apache/kafka/tools/consumer/ConsoleConsumerOptions.java#L353
>  
> {code:java}
> private static String convertDeprecatedClass(String className) {
>     switch (className) {
>         case "kafka.tools.DefaultMessageFormatter":
>             System.err.println("WARNING: kafka.tools.DefaultMessageFormatter is deprecated and will be removed in the next major release. " +
>                 "Please use org.apache.kafka.tools.consumer.DefaultMessageFormatter instead");
>             return DefaultMessageFormatter.class.getName();
>         case "kafka.tools.LoggingMessageFormatter":
>             System.err.println("WARNING: kafka.tools.LoggingMessageFormatter is deprecated and will be removed in the next major release. " +
>                 "Please use org.apache.kafka.tools.consumer.LoggingMessageFormatter instead");
>             return LoggingMessageFormatter.class.getName();
>         case "kafka.tools.NoOpMessageFormatter":
>             System.err.println("WARNING: kafka.tools.NoOpMessageFormatter is deprecated and will be removed in the next major release. " +
>                 "Please use org.apache.kafka.tools.consumer.NoOpMessageFormatter instead");
>             return NoOpMessageFormatter.class.getName();
>         default:
>             return className;
>     }
> }
> {code}
> Those deprecated formatters "strings" should be removed from 4.0.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16830) Remove the scala version formatters support

2024-06-21 Thread Kuan Po Tseng (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856734#comment-17856734
 ] 

Kuan Po Tseng commented on KAFKA-16830:
---

hi [~ksolves.kafka], this Jira is a follow-up of 
https://issues.apache.org/jira/browse/KAFKA-16795, and in 4.0.0 we are going to 
remove this line: 
[https://github.com/apache/kafka/blob/9b5b434e2a6b2d5290ea403fc02859b1c523d8aa/tools/src/main/java/org/apache/kafka/tools/consumer/ConsoleConsumerOptions.java#L353]

> Remove the scala version formatters support
> ---
>
> Key: KAFKA-16830
> URL: https://issues.apache.org/jira/browse/KAFKA-16830
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Kuan Po Tseng
>Priority: Minor
> Fix For: 4.0.0
>
>
> https://github.com/apache/kafka/blob/trunk/tools/src/main/java/org/apache/kafka/tools/consumer/ConsoleConsumerOptions.java#L353
>  
> {code:java}
> private static String convertDeprecatedClass(String className) {
>     switch (className) {
>         case "kafka.tools.DefaultMessageFormatter":
>             System.err.println("WARNING: kafka.tools.DefaultMessageFormatter is deprecated and will be removed in the next major release. " +
>                 "Please use org.apache.kafka.tools.consumer.DefaultMessageFormatter instead");
>             return DefaultMessageFormatter.class.getName();
>         case "kafka.tools.LoggingMessageFormatter":
>             System.err.println("WARNING: kafka.tools.LoggingMessageFormatter is deprecated and will be removed in the next major release. " +
>                 "Please use org.apache.kafka.tools.consumer.LoggingMessageFormatter instead");
>             return LoggingMessageFormatter.class.getName();
>         case "kafka.tools.NoOpMessageFormatter":
>             System.err.println("WARNING: kafka.tools.NoOpMessageFormatter is deprecated and will be removed in the next major release. " +
>                 "Please use org.apache.kafka.tools.consumer.NoOpMessageFormatter instead");
>             return NoOpMessageFormatter.class.getName();
>         default:
>             return className;
>     }
> }
> {code}
> Those deprecated formatters "strings" should be removed from 4.0.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16830) Remove the scala version formatters support

2024-06-21 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-16830:
---
Description: 
https://github.com/apache/kafka/blob/trunk/tools/src/main/java/org/apache/kafka/tools/consumer/ConsoleConsumerOptions.java#L353

 
{code:java}
private static String convertDeprecatedClass(String className) {
    switch (className) {
        case "kafka.tools.DefaultMessageFormatter":
            System.err.println("WARNING: kafka.tools.DefaultMessageFormatter is deprecated and will be removed in the next major release. " +
                "Please use org.apache.kafka.tools.consumer.DefaultMessageFormatter instead");
            return DefaultMessageFormatter.class.getName();
        case "kafka.tools.LoggingMessageFormatter":
            System.err.println("WARNING: kafka.tools.LoggingMessageFormatter is deprecated and will be removed in the next major release. " +
                "Please use org.apache.kafka.tools.consumer.LoggingMessageFormatter instead");
            return LoggingMessageFormatter.class.getName();
        case "kafka.tools.NoOpMessageFormatter":
            System.err.println("WARNING: kafka.tools.NoOpMessageFormatter is deprecated and will be removed in the next major release. " +
                "Please use org.apache.kafka.tools.consumer.NoOpMessageFormatter instead");
            return NoOpMessageFormatter.class.getName();
        default:
            return className;
    }
}
{code}


Those deprecated formatters "strings" should be removed from 4.0.0



  was:
https://github.com/apache/kafka/blob/trunk/tools/src/main/java/org/apache/kafka/tools/consumer/ConsoleConsumerOptions.java#L353

 

Those deprecated formatters "strings" should be removed from 4.0.0


> Remove the scala version formatters support
> ---
>
> Key: KAFKA-16830
> URL: https://issues.apache.org/jira/browse/KAFKA-16830
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Kuan Po Tseng
>Priority: Minor
> Fix For: 4.0.0
>
>
> https://github.com/apache/kafka/blob/trunk/tools/src/main/java/org/apache/kafka/tools/consumer/ConsoleConsumerOptions.java#L353
>  
> {code:java}
> private static String convertDeprecatedClass(String className) {
>     switch (className) {
>         case "kafka.tools.DefaultMessageFormatter":
>             System.err.println("WARNING: kafka.tools.DefaultMessageFormatter is deprecated and will be removed in the next major release. " +
>                 "Please use org.apache.kafka.tools.consumer.DefaultMessageFormatter instead");
>             return DefaultMessageFormatter.class.getName();
>         case "kafka.tools.LoggingMessageFormatter":
>             System.err.println("WARNING: kafka.tools.LoggingMessageFormatter is deprecated and will be removed in the next major release. " +
>                 "Please use org.apache.kafka.tools.consumer.LoggingMessageFormatter instead");
>             return LoggingMessageFormatter.class.getName();
>         case "kafka.tools.NoOpMessageFormatter":
>             System.err.println("WARNING: kafka.tools.NoOpMessageFormatter is deprecated and will be removed in the next major release. " +
>                 "Please use org.apache.kafka.tools.consumer.NoOpMessageFormatter instead");
>             return NoOpMessageFormatter.class.getName();
>         default:
>             return className;
>     }
> }
> {code}
> Those deprecated formatters "strings" should be removed from 4.0.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16830) Remove the scala version formatters support

2024-06-21 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856732#comment-17856732
 ] 

Chia-Ping Tsai commented on KAFKA-16830:


{quote}
Can you update the GitHub link, please?
{quote}
done!

> Remove the scala version formatters support
> ---
>
> Key: KAFKA-16830
> URL: https://issues.apache.org/jira/browse/KAFKA-16830
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Kuan Po Tseng
>Priority: Minor
> Fix For: 4.0.0
>
>
> https://github.com/apache/kafka/blob/trunk/tools/src/main/java/org/apache/kafka/tools/consumer/ConsoleConsumerOptions.java#L353
>  
> {code:java}
> private static String convertDeprecatedClass(String className) {
>     switch (className) {
>         case "kafka.tools.DefaultMessageFormatter":
>             System.err.println("WARNING: kafka.tools.DefaultMessageFormatter is deprecated and will be removed in the next major release. " +
>                 "Please use org.apache.kafka.tools.consumer.DefaultMessageFormatter instead");
>             return DefaultMessageFormatter.class.getName();
>         case "kafka.tools.LoggingMessageFormatter":
>             System.err.println("WARNING: kafka.tools.LoggingMessageFormatter is deprecated and will be removed in the next major release. " +
>                 "Please use org.apache.kafka.tools.consumer.LoggingMessageFormatter instead");
>             return LoggingMessageFormatter.class.getName();
>         case "kafka.tools.NoOpMessageFormatter":
>             System.err.println("WARNING: kafka.tools.NoOpMessageFormatter is deprecated and will be removed in the next major release. " +
>                 "Please use org.apache.kafka.tools.consumer.NoOpMessageFormatter instead");
>             return NoOpMessageFormatter.class.getName();
>         default:
>             return className;
>     }
> }
> {code}
> Those deprecated formatters "strings" should be removed from 4.0.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16830) Remove the scala version formatters support

2024-06-21 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-16830:
---
Description: 
https://github.com/apache/kafka/blob/trunk/tools/src/main/java/org/apache/kafka/tools/consumer/ConsoleConsumerOptions.java#L353

 

Those deprecated formatters "strings" should be removed from 4.0.0

  was:
[https://github.com/apache/kafka/blob/trunk/tools/src/main/java/org/apache/kafka/tools/consumer/ConsoleConsumerOptions.java#L72]

 

Those deprecated formatters "strings" should be removed from 4.0.0


> Remove the scala version formatters support
> ---
>
> Key: KAFKA-16830
> URL: https://issues.apache.org/jira/browse/KAFKA-16830
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Kuan Po Tseng
>Priority: Minor
> Fix For: 4.0.0
>
>
> https://github.com/apache/kafka/blob/trunk/tools/src/main/java/org/apache/kafka/tools/consumer/ConsoleConsumerOptions.java#L353
>  
> Those deprecated formatters "strings" should be removed from 4.0.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16830) Remove the scala version formatters support

2024-06-21 Thread Ksolves (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856730#comment-17856730
 ] 

Ksolves commented on KAFKA-16830:
-

[~chia7712] It seems the class has been updated, as L72 now shows a blank line. 
Can you update the GitHub link, please?

> Remove the scala version formatters support
> ---
>
> Key: KAFKA-16830
> URL: https://issues.apache.org/jira/browse/KAFKA-16830
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Kuan Po Tseng
>Priority: Minor
> Fix For: 4.0.0
>
>
> [https://github.com/apache/kafka/blob/trunk/tools/src/main/java/org/apache/kafka/tools/consumer/ConsoleConsumerOptions.java#L72]
>  
> Those deprecated formatters "strings" should be removed from 4.0.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-17015) ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be deprecated and throw an exception

2024-06-21 Thread dujian0068 (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dujian0068 updated KAFKA-17015:
---
Description: 
While reviewing PR#16970, I found that `ContextualRecord#hashCode()` and 
`ProcessorRecordContext#hashCode()` were deprecated because they have a mutable 
attribute, which causes the hashCode to change.

I don't think hashCode should be discarded just because it is mutable. hashCode 
is a very important property of an object; it simply shouldn't be used for hash 
addressing, just as with ArrayList.

 

  was:
While reviewing PR#16416, I found that `ContextualRecord#hashCode()` and 
`ProcessorRecordContext#hashCode()` were deprecated because they have a mutable 
attribute, which causes the hashCode to change.

I don't think hashCode should be discarded just because it is mutable. hashCode 
is a very important property of an object; it simply shouldn't be used for hash 
addressing, just as with ArrayList.

 


> ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be 
> deprecated and throw an exception
> -
>
> Key: KAFKA-17015
> URL: https://issues.apache.org/jira/browse/KAFKA-17015
> Project: Kafka
>  Issue Type: Improvement
>Reporter: dujian0068
>Assignee: dujian0068
>Priority: Minor
>
> While reviewing PR#16970, I found that `ContextualRecord#hashCode()` and 
> `ProcessorRecordContext#hashCode()` were deprecated because they have a mutable 
> attribute, which causes the hashCode to change.
> I don't think hashCode should be discarded just because it is mutable. hashCode 
> is a very important property of an object; it simply shouldn't be used for hash 
> addressing, just as with ArrayList.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16855) KRaft - Wire replaying a TopicRecord

2024-06-21 Thread Muralidhar Basani (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856726#comment-17856726
 ] 

Muralidhar Basani commented on KAFKA-16855:
---

[~showuon] Yes, I will look into part 2, and I would also like to complete this 
KIP for 3.9.

> KRaft - Wire replaying a TopicRecord
> 
>
> Key: KAFKA-16855
> URL: https://issues.apache.org/jira/browse/KAFKA-16855
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Christo Lolov
>Assignee: Muralidhar Basani
>Priority: Major
>
> *Summary*
> Replaying a TopicRecord containing a new TieredEpoch and TieredState needs to 
> interact with the two thread pools in the RemoteLogManager to add/remove the 
> correct tasks from each



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-16855) KRaft - Wire replaying a TopicRecord

2024-06-21 Thread Luke Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856718#comment-17856718
 ] 

Luke Chen edited comment on KAFKA-16855 at 6/21/24 9:59 AM:


[~muralibasani] , are you willing to continue to work on part 2 of this task 
after 16853 is done? We would like to complete this KIP in v3.9.0. Thanks.


was (Author: showuon):
[~muralibasani] , are you willing to continue to work on part 2 of this task?

> KRaft - Wire replaying a TopicRecord
> 
>
> Key: KAFKA-16855
> URL: https://issues.apache.org/jira/browse/KAFKA-16855
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Christo Lolov
>Assignee: Muralidhar Basani
>Priority: Major
>
> *Summary*
> Replaying a TopicRecord containing a new TieredEpoch and TieredState needs to 
> interact with the two thread pools in the RemoteLogManager to add/remove the 
> correct tasks from each



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16853) Split RemoteLogManagerScheduledThreadPool

2024-06-21 Thread Luke Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856717#comment-17856717
 ] 

Luke Chen commented on KAFKA-16853:
---

[~abhijeetkumar], do you have a plan for when you will start working on this? 
It blocks KAFKA-16855. Thanks.

> Split RemoteLogManagerScheduledThreadPool
> -
>
> Key: KAFKA-16853
> URL: https://issues.apache.org/jira/browse/KAFKA-16853
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Christo Lolov
>Assignee: Abhijeet Kumar
>Priority: Major
>
> *Summary*
> To begin with create just the RemoteDataExpirationThreadPool and move 
> expiration to it. Keep all settings as if the only thread pool was the 
> RemoteLogManagerScheduledThreadPool. Ensure that the new thread pool is wired 
> correctly to the RemoteLogManager.
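
A minimal sketch of that first step, under assumed, illustrative names (RemoteLogManagerPools and its accessors are not the actual RemoteLogManager fields): a second scheduled pool is created for expiration while both pools keep using the single existing pool-size setting, so overall behaviour stays as if there were only one pool.

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

// Hypothetical sketch of introducing a dedicated expiration pool alongside the
// existing scheduled pool; names are illustrative, not the real Kafka fields.
public class RemoteLogManagerPools {
    private final ScheduledExecutorService scheduledPool;    // existing copy/upload work
    private final ScheduledExecutorService expirationPool;   // new: expiration only

    public RemoteLogManagerPools(int configuredThreadPoolSize) {
        // Both pools reuse the single existing thread-pool-size setting.
        this.scheduledPool = Executors.newScheduledThreadPool(configuredThreadPoolSize);
        this.expirationPool = Executors.newScheduledThreadPool(configuredThreadPoolSize);
    }

    public ScheduledExecutorService scheduledPool() {
        return scheduledPool;
    }

    // Expiration tasks are submitted here instead of the shared pool.
    public ScheduledExecutorService expirationPool() {
        return expirationPool;
    }

    public void shutdown() {
        scheduledPool.shutdownNow();
        expirationPool.shutdownNow();
    }
}
{code}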



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16855) KRaft - Wire replaying a TopicRecord

2024-06-21 Thread Luke Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856718#comment-17856718
 ] 

Luke Chen commented on KAFKA-16855:
---

[~muralibasani] , are you willing to continue to work on part 2 of this task?

> KRaft - Wire replaying a TopicRecord
> 
>
> Key: KAFKA-16855
> URL: https://issues.apache.org/jira/browse/KAFKA-16855
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Christo Lolov
>Assignee: Muralidhar Basani
>Priority: Major
>
> *Summary*
> Replaying a TopicRecord containing a new TieredEpoch and TieredState needs to 
> interact with the two thread pools in the RemoteLogManager to add/remove the 
> correct tasks from each



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16855) KRaft - Wire replaying a TopicRecord

2024-06-21 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen reassigned KAFKA-16855:
-

Assignee: Muralidhar Basani

> KRaft - Wire replaying a TopicRecord
> 
>
> Key: KAFKA-16855
> URL: https://issues.apache.org/jira/browse/KAFKA-16855
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Christo Lolov
>Assignee: Muralidhar Basani
>Priority: Major
>
> *Summary*
> Replaying a TopicRecord containing a new TieredEpoch and TieredState needs to 
> interact with the two thread pools in the RemoteLogManager to add/remove the 
> correct tasks from each



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-17015) ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be deprecated and throw an exception

2024-06-21 Thread dujian0068 (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856714#comment-17856714
 ] 

dujian0068 commented on KAFKA-17015:


Hello

[~chia7712] Could you please evaluate whether this should be changed?

Thank you

> ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be 
> deprecated and throw an exception
> -
>
> Key: KAFKA-17015
> URL: https://issues.apache.org/jira/browse/KAFKA-17015
> Project: Kafka
>  Issue Type: Improvement
>Reporter: dujian0068
>Assignee: dujian0068
>Priority: Minor
>
> While reviewing PR#16416, I found that `ContextualRecord#hashCode()` and 
> `ProcessorRecordContext#hashCode()` were deprecated because they have a mutable 
> attribute, which causes the hashCode to change.
> I don't think hashCode should be discarded just because it is mutable. hashCode 
> is a very important property of an object; it simply shouldn't be used for hash 
> addressing, just as with ArrayList.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-17015) ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be deprecated and throw an exception

2024-06-21 Thread dujian0068 (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dujian0068 reassigned KAFKA-17015:
--

Assignee: dujian0068

> ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be 
> deprecated and throw an exception
> -
>
> Key: KAFKA-17015
> URL: https://issues.apache.org/jira/browse/KAFKA-17015
> Project: Kafka
>  Issue Type: Improvement
>Reporter: dujian0068
>Assignee: dujian0068
>Priority: Minor
>
> While reviewing PR#16416, I found that `ContextualRecord#hashCode()` and 
> `ProcessorRecordContext#hashCode()` were deprecated because they have a mutable 
> attribute, which causes the hashCode to change.
> I don't think hashCode should be discarded just because it is mutable. hashCode 
> is a very important property of an object; it simply shouldn't be used for hash 
> addressing, just as with ArrayList.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17015) ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be deprecated and throw an exception

2024-06-21 Thread dujian0068 (Jira)
dujian0068 created KAFKA-17015:
--

 Summary: 
ContextualRecord#hashCode()、ProcessorRecordContext#hashCode() Should not be 
deprecated and throw an exception
 Key: KAFKA-17015
 URL: https://issues.apache.org/jira/browse/KAFKA-17015
 Project: Kafka
  Issue Type: Improvement
Reporter: dujian0068


While reviewing PR#16416, I found that `ContextualRecord#hashCode()` and 
`ProcessorRecordContext#hashCode()` were deprecated because they have a mutable 
attribute, which causes the hashCode to change.

I don't think hashCode should be discarded just because it is mutable. hashCode 
is a very important property of an object; it simply shouldn't be used for hash 
addressing, just as with ArrayList.
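
A small, self-contained illustration of the argument (plain JDK code, unrelated to the Kafka classes): ArrayList's hashCode is useful even though it changes on mutation; the only real hazard is mutating an object while it is used as a key in a hash-based collection.

{code:java}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MutableHashCodeDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        list.add("a");
        int before = list.hashCode();

        list.add("b");
        int after = list.hashCode();
        System.out.println("hashCode changed: " + (before != after)); // true

        // The pitfall: mutating an object used for "hash addressing".
        Set<List<String>> set = new HashSet<>();
        List<String> key = new ArrayList<>(List.of("x"));
        set.add(key);
        key.add("y");                           // key's hashCode changes
        System.out.println(set.contains(key));  // false: the entry is "lost"
    }
}
{code}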

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16850) KRaft - Add v2 of TopicRecord

2024-06-21 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen reassigned KAFKA-16850:
-

Assignee: Luke Chen

> KRaft - Add v2 of TopicRecord
> -
>
> Key: KAFKA-16850
> URL: https://issues.apache.org/jira/browse/KAFKA-16850
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Christo Lolov
>Assignee: Luke Chen
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15146) Flaky test ConsumerBounceTest.testConsumptionWithBrokerFailures

2024-06-21 Thread Igor Soarez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856702#comment-17856702
 ] 

Igor Soarez commented on KAFKA-15146:
-

[~chiacyu] I've assigned this to you. Thanks for helping! 

> Flaky test ConsumerBounceTest.testConsumptionWithBrokerFailures
> ---
>
> Key: KAFKA-15146
> URL: https://issues.apache.org/jira/browse/KAFKA-15146
> Project: Kafka
>  Issue Type: Test
>  Components: unit tests
>Reporter: Divij Vaidya
>Assignee: Chia Chuan Yu
>Priority: Major
>  Labels: flaky-test
>
> Flaky test that fails with the following error. Example build - 
> [https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-13953/2] 
> {noformat}
> Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
> ConsumerBounceTest > testConsumptionWithBrokerFailures() FAILED
> org.apache.kafka.clients.consumer.CommitFailedException: Offset commit 
> cannot be completed since the consumer is not part of an active group for 
> auto partition assignment; it is likely that the consumer was kicked out of 
> the group.
> at 
> app//org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:1351)
> at 
> app//org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:1188)
> at 
> app//org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1518)
> at 
> app//org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1417)
> at 
> app//org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1374)
> at 
> app//kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:109)
> at 
> app//kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:81){noformat}
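
For reference, a minimal sketch of the usual client-side reaction to this error (the helper name is illustrative and this is not proposed as the test fix): commitSync() throws CommitFailedException once the consumer has been kicked out of the group, and the safe response is to skip the commit and let the next poll() rejoin, accepting that some records may be reprocessed.

{code:java}
import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public final class SafeCommit {
    private SafeCommit() { }

    public static void commitQuietly(KafkaConsumer<?, ?> consumer) {
        try {
            consumer.commitSync();
        } catch (CommitFailedException e) {
            // Group membership was lost; the consumer will rejoin on the next
            // poll() and offsets can be committed again afterwards.
            System.err.println("Offset commit skipped, consumer left the group: " + e.getMessage());
        }
    }
}
{code}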



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15146) Flaky test ConsumerBounceTest.testConsumptionWithBrokerFailures

2024-06-21 Thread Igor Soarez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Soarez reassigned KAFKA-15146:
---

Assignee: Chia Chuan Yu

> Flaky test ConsumerBounceTest.testConsumptionWithBrokerFailures
> ---
>
> Key: KAFKA-15146
> URL: https://issues.apache.org/jira/browse/KAFKA-15146
> Project: Kafka
>  Issue Type: Test
>  Components: unit tests
>Reporter: Divij Vaidya
>Assignee: Chia Chuan Yu
>Priority: Major
>  Labels: flaky-test
>
> Flaky test that fails with the following error. Example build - 
> [https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-13953/2] 
> {noformat}
> Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
> ConsumerBounceTest > testConsumptionWithBrokerFailures() FAILED
> org.apache.kafka.clients.consumer.CommitFailedException: Offset commit 
> cannot be completed since the consumer is not part of an active group for 
> auto partition assignment; it is likely that the consumer was kicked out of 
> the group.
> at 
> app//org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:1351)
> at 
> app//org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:1188)
> at 
> app//org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1518)
> at 
> app//org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1417)
> at 
> app//org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1374)
> at 
> app//kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:109)
> at 
> app//kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:81){noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (KAFKA-17008) Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944

2024-06-21 Thread Dmitry Werner (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-17008 ]


Dmitry Werner deleted comment on KAFKA-17008:
---

was (Author: JIRAUSER300605):
[~arushi315] Hello, can I take the task?

> Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944
> 
>
> Key: KAFKA-17008
> URL: https://issues.apache.org/jira/browse/KAFKA-17008
> Project: Kafka
>  Issue Type: Bug
>Reporter: Arushi Helms
>Priority: Major
>
> Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944.
> I could not find an existing ticket for this; if there is one, please mark 
> this as a duplicate. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-17008) Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944

2024-06-21 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-17008.

Resolution: Duplicate

> Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944
> 
>
> Key: KAFKA-17008
> URL: https://issues.apache.org/jira/browse/KAFKA-17008
> Project: Kafka
>  Issue Type: Bug
>Reporter: Arushi Helms
>Priority: Major
>
> Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944.
> I could not find an existing ticket for this; if there is one, please mark 
> this as a duplicate. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-17008) Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944

2024-06-21 Thread Mickael Maison (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856698#comment-17856698
 ] 

Mickael Maison commented on KAFKA-17008:


We're already using ZooKeeper 3.8.4. This has been addressed in 
https://issues.apache.org/jira/browse/KAFKA-16347.

> Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944
> 
>
> Key: KAFKA-17008
> URL: https://issues.apache.org/jira/browse/KAFKA-17008
> Project: Kafka
>  Issue Type: Bug
>Reporter: Arushi Helms
>Priority: Major
>
> Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944.
> I could not find an existing ticket for this; if there is one, please mark 
> this as a duplicate. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-17008) Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944

2024-06-21 Thread Dmitry Werner (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856694#comment-17856694
 ] 

Dmitry Werner commented on KAFKA-17008:
---

[~arushi315] Hello, can I take the task?

> Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944
> 
>
> Key: KAFKA-17008
> URL: https://issues.apache.org/jira/browse/KAFKA-17008
> Project: Kafka
>  Issue Type: Bug
>Reporter: Arushi Helms
>Priority: Major
>
> Update zookeeper to 3.8.4 or 3.9.2 to address CVE-2024-23944.
> I could not find an existing ticket for this; if there is one, please mark 
> this as a duplicate. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-17012) Enable testMeasureCommitSyncDuration, testMeasureCommittedDurationOnFailure, testInvalidGroupMetadata, testMeasureCommittedDuration, testOffsetsForTimesTimeout, testBeg

2024-06-21 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-17012:
--

Assignee: xuanzhang gong  (was: Chia-Ping Tsai)

> Enable testMeasureCommitSyncDuration, testMeasureCommittedDurationOnFailure, 
> testInvalidGroupMetadata, testMeasureCommittedDuration, 
> testOffsetsForTimesTimeout, testBeginningOffsetsTimeout and 
> testEndOffsetsTimeout for AsyncConsumer
> 
>
> Key: KAFKA-17012
> URL: https://issues.apache.org/jira/browse/KAFKA-17012
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: xuanzhang gong
>Priority: Minor
>
> A quick test on my end suggests that "testMeasureCommitSyncDuration, 
> testMeasureCommittedDurationOnFailure, testInvalidGroupMetadata, 
> testMeasureCommittedDuration, testOffsetsForTimesTimeout, 
> testBeginningOffsetsTimeout, testEndOffsetsTimeout" can work with 
> AsyncConsumer.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)