[jira] [Updated] (KAFKA-17041) Add pagination when describe large set of metadata via Admin API

2024-07-03 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim updated KAFKA-17041:
--
Issue Type: Improvement  (was: Task)

> Add pagination when describe large set of metadata via Admin API 
> -
>
> Key: KAFKA-17041
> URL: https://issues.apache.org/jira/browse/KAFKA-17041
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Reporter: Omnia Ibrahim
>Assignee: Omnia Ibrahim
>Priority: Major
>
> Some of the requests made via the Admin API time out on large clusters or 
> clusters with a large set of specific metadata. For example, OffsetFetchRequest 
> and DescribeLogDirsRequest time out due to the large number of partitions on 
> the cluster. DescribeProducersRequest and ListTransactionsRequest also time out 
> due to too many short-lived PIDs or too many hanging transactions.
> [KIP-1062: Introduce Pagination for some requests used by Admin 
> API|https://cwiki.apache.org/confluence/display/KAFKA/KIP-1062%3A+Introduce+Pagination+for+some+requests+used+by+Admin+API]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-17041) Add pagination when describe large set of metadata via Admin API

2024-07-03 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-17041:
-

Assignee: Omnia Ibrahim

> Add pagination when describe large set of metadata via Admin API 
> -
>
> Key: KAFKA-17041
> URL: https://issues.apache.org/jira/browse/KAFKA-17041
> Project: Kafka
>  Issue Type: Task
>  Components: admin
>Reporter: Omnia Ibrahim
>Assignee: Omnia Ibrahim
>Priority: Major
>
> Some of the requests made via the Admin API time out on large clusters or 
> clusters with a large set of specific metadata. For example, OffsetFetchRequest 
> and DescribeLogDirsRequest time out due to the large number of partitions on 
> the cluster. DescribeProducersRequest and ListTransactionsRequest also time out 
> due to too many short-lived PIDs or too many hanging transactions.
> [KIP-1062: Introduce Pagination for some requests used by Admin 
> API|https://cwiki.apache.org/confluence/display/KAFKA/KIP-1062%3A+Introduce+Pagination+for+some+requests+used+by+Admin+API]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-17041) Add pagination when describe large set of metadata via Admin API

2024-06-26 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim updated KAFKA-17041:
--
Description: 
Some of the requests made via the Admin API time out on large clusters or 
clusters with a large set of specific metadata. For example, OffsetFetchRequest 
and DescribeLogDirsRequest time out due to the large number of partitions on 
the cluster. DescribeProducersRequest and ListTransactionsRequest also time out 
due to too many short-lived PIDs or too many hanging transactions.

[KIP-1062: Introduce Pagination for some requests used by Admin 
API|https://cwiki.apache.org/confluence/display/KAFKA/KIP-1062%3A+Introduce+Pagination+for+some+requests+used+by+Admin+API]

  was:
Some of the requests made via the Admin API time out on large clusters or 
clusters with too much metadata. For example, OffsetFetchRequest and 
DescribeLogDirsRequest time out due to the large number of partitions on the 
cluster. DescribeProducersRequest and ListTransactionsRequest also time out due 
to too many short-lived PIDs or too many hanging transactions.

[KIP-1062: Introduce Pagination for some requests used by Admin 
API|https://cwiki.apache.org/confluence/display/KAFKA/KIP-1062%3A+Introduce+Pagination+for+some+requests+used+by+Admin+API]


> Add pagination when describe large set of metadata via Admin API 
> -
>
> Key: KAFKA-17041
> URL: https://issues.apache.org/jira/browse/KAFKA-17041
> Project: Kafka
>  Issue Type: Task
>  Components: admin
>Reporter: Omnia Ibrahim
>Priority: Major
>
> Some of the requests made via the Admin API time out on large clusters or 
> clusters with a large set of specific metadata. For example, OffsetFetchRequest 
> and DescribeLogDirsRequest time out due to the large number of partitions on 
> the cluster. DescribeProducersRequest and ListTransactionsRequest also time out 
> due to too many short-lived PIDs or too many hanging transactions.
> [KIP-1062: Introduce Pagination for some requests used by Admin 
> API|https://cwiki.apache.org/confluence/display/KAFKA/KIP-1062%3A+Introduce+Pagination+for+some+requests+used+by+Admin+API]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-17041) Add pagination when describe large set of metadata via Admin API

2024-06-26 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17860142#comment-17860142
 ] 

Omnia Ibrahim commented on KAFKA-17041:
---

Hi [~bmilk], the KIP is still a draft, so it is not ready to be picked up yet. 

> Add pagination when describe large set of metadata via Admin API 
> -
>
> Key: KAFKA-17041
> URL: https://issues.apache.org/jira/browse/KAFKA-17041
> Project: Kafka
>  Issue Type: Task
>  Components: admin
>Reporter: Omnia Ibrahim
>Priority: Major
>
> Some of the requests made via the Admin API time out on large clusters or 
> clusters with too much metadata. For example, OffsetFetchRequest and 
> DescribeLogDirsRequest time out due to the large number of partitions on the 
> cluster. DescribeProducersRequest and ListTransactionsRequest also time out due 
> to too many short-lived PIDs or too many hanging transactions.
> [KIP-1062: Introduce Pagination for some requests used by Admin 
> API|https://cwiki.apache.org/confluence/display/KAFKA/KIP-1062%3A+Introduce+Pagination+for+some+requests+used+by+Admin+API]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-17041) Add pagination when describe large set of metadata via Admin API

2024-06-26 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17860142#comment-17860142
 ] 

Omnia Ibrahim edited comment on KAFKA-17041 at 6/26/24 12:13 PM:
-

Hi [~bmilk], the KIP is still a draft, so it is not ready to be picked up yet. 
For the time being I'll keep it unassigned, or assigned to me, until the KIP 
gets voted on. 


was (Author: omnia_h_ibrahim):
Hi [~bmilk], the KIP is still a draft, so it is not ready to be picked up yet. 

> Add pagination when describe large set of metadata via Admin API 
> -
>
> Key: KAFKA-17041
> URL: https://issues.apache.org/jira/browse/KAFKA-17041
> Project: Kafka
>  Issue Type: Task
>  Components: admin
>Reporter: Omnia Ibrahim
>Priority: Major
>
> Some of the requests made via the Admin API time out on large clusters or 
> clusters with too much metadata. For example, OffsetFetchRequest and 
> DescribeLogDirsRequest time out due to the large number of partitions on the 
> cluster. DescribeProducersRequest and ListTransactionsRequest also time out due 
> to too many short-lived PIDs or too many hanging transactions.
> [KIP-1062: Introduce Pagination for some requests used by Admin 
> API|https://cwiki.apache.org/confluence/display/KAFKA/KIP-1062%3A+Introduce+Pagination+for+some+requests+used+by+Admin+API]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17041) Add pagination when describe large set of metadata via Admin API

2024-06-26 Thread Omnia Ibrahim (Jira)
Omnia Ibrahim created KAFKA-17041:
-

 Summary: Add pagination when describe large set of metadata via 
Admin API 
 Key: KAFKA-17041
 URL: https://issues.apache.org/jira/browse/KAFKA-17041
 Project: Kafka
  Issue Type: Task
  Components: admin
Reporter: Omnia Ibrahim


Some of the requests made via the Admin API time out on large clusters or 
clusters with too much metadata. For example, OffsetFetchRequest and 
DescribeLogDirsRequest time out due to the large number of partitions on the 
cluster. DescribeProducersRequest and ListTransactionsRequest also time out due 
to too many short-lived PIDs or too many hanging transactions.

[KIP-1062: Introduce Pagination for some requests used by Admin 
API|https://cwiki.apache.org/confluence/display/KAFKA/KIP-1062%3A+Introduce+Pagination+for+some+requests+used+by+Admin+API]
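
To illustrate the shape of the change, here is a minimal, self-contained sketch 
of the kind of cursor-based pagination loop KIP-1062 proposes; the 
{{PagedResult}}, {{nextCursor}} and {{describePage}} names are hypothetical 
stand-ins, not an existing Admin API:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Hypothetical shapes only: KIP-1062 is still a draft.
interface PagedResult<T> {
    List<T> items();                // one bounded page of metadata
    Optional<String> nextCursor();  // empty once the last page is reached
}

interface PagedDescriber<T> {
    // Each call returns a bounded page instead of the whole metadata set, so no
    // single request has to carry e.g. every partition on a large cluster.
    PagedResult<T> describePage(Optional<String> cursor, int pageSize);
}

final class PaginationExample {
    static <T> List<T> describeAll(PagedDescriber<T> describer, int pageSize) {
        List<T> all = new ArrayList<>();
        Optional<String> cursor = Optional.empty();
        do {
            PagedResult<T> page = describer.describePage(cursor, pageSize);
            all.addAll(page.items());
            cursor = page.nextCursor();
        } while (cursor.isPresent());
        return all;
    }
}
{code}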



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-16638) Align the naming convention for config and default variables in *Config classes

2024-06-07 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17853083#comment-17853083
 ] 

Omnia Ibrahim edited comment on KAFKA-16638 at 6/7/24 9:21 AM:
---

Hi [~ericlu95], yeah, I wanted to update this Jira to remove the public APIs 
from it but totally forgot. I think it would be hard to change the public ones, 
which is why I kept it open for discussion. Especially since they are used by 
applications, this would be a very disruptive change. It would be nice, but 
unfortunately we can't change these anymore! 


was (Author: omnia_h_ibrahim):
Hi [~ericlu95], yeah, I wanted to update this Jira to remove the public APIs 
from it but totally forgot. I think it would be hard to change the public ones, 
as they are used by applications, so this would be a very disruptive change. It 
would be nice, but unfortunately we can't change these anymore!

> Align the naming convention for config and default variables in *Config 
> classes
> ---
>
> Key: KAFKA-16638
> URL: https://issues.apache.org/jira/browse/KAFKA-16638
> Project: Kafka
>  Issue Type: Task
>Reporter: Omnia Ibrahim
>Priority: Trivial
>
> Some classes in the code violate the naming convention for config, doc, and 
> default variables, which is:
>  * `_CONFIG` suffix for defining the configuration
>  * `_DEFAULT` suffix for the default value
>  * `_DOC` suffix for the doc
> The following classes need to be updated:
>  * `CleanerConfig` and `RemoteLogManagerConfig` to use the `_CONFIG` suffix 
> instead of `_PROP`.
>  * Others, like `LogConfig` and `QuorumConfig`, to use the `_DEFAULT` suffix 
> instead of the `DEFAULT_` prefix.
>  * The same goes for `CommonClientConfigs` and `StreamsConfig`; however, these 
> are public interfaces and will need a KIP to rename the default value variables 
> and mark the old ones as deprecated. This might need to be broken out into a 
> different Jira.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16638) Align the naming convention for config and default variables in *Config classes

2024-06-07 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17853083#comment-17853083
 ] 

Omnia Ibrahim commented on KAFKA-16638:
---

Hi [~ericlu95], yeah, I wanted to update this Jira to remove the public APIs 
from it but totally forgot. I think it would be hard to change the public ones, 
as they are used by applications, so this would be a very disruptive change. It 
would be nice, but unfortunately we can't change these anymore!

> Align the naming convention for config and default variables in *Config 
> classes
> ---
>
> Key: KAFKA-16638
> URL: https://issues.apache.org/jira/browse/KAFKA-16638
> Project: Kafka
>  Issue Type: Task
>Reporter: Omnia Ibrahim
>Priority: Trivial
>
> Some classes in the code violate the naming convention for config, doc, and 
> default variables, which is:
>  * `_CONFIG` suffix for defining the configuration
>  * `_DEFAULT` suffix for the default value
>  * `_DOC` suffix for the doc
> The following classes need to be updated:
>  * `CleanerConfig` and `RemoteLogManagerConfig` to use the `_CONFIG` suffix 
> instead of `_PROP`.
>  * Others, like `LogConfig` and `QuorumConfig`, to use the `_DEFAULT` suffix 
> instead of the `DEFAULT_` prefix.
>  * The same goes for `CommonClientConfigs` and `StreamsConfig`; however, these 
> are public interfaces and will need a KIP to rename the default value variables 
> and mark the old ones as deprecated. This might need to be broken out into a 
> different Jira.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16254) Allow MM2 to fully disable offset sync feature

2024-05-20 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim updated KAFKA-16254:
--
Fix Version/s: 3.8.0

> Allow MM2 to fully disable offset sync feature
> --
>
> Key: KAFKA-16254
> URL: https://issues.apache.org/jira/browse/KAFKA-16254
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.5.0, 3.6.0, 3.7.0
>Reporter: Omnia Ibrahim
>Assignee: Omnia Ibrahim
>Priority: Major
>  Labels: need-kip
> Fix For: 3.8.0
>
>
> *Background:* 
> At the moment the offset syncing feature in MM2 is split into two parts:
>  # One is in `MirrorSourceTask`, where we store the new record's offset on the 
> target cluster in the {{offset_syncs}} internal topic after mirroring the 
> record. Before KAFKA-14610 in 3.5, MM2 used to just queue the offsets and 
> publish them later, but since 3.5 this behaviour changed: we now publish any 
> offset syncs that we've queued up, but have not yet been able to publish, when 
> `MirrorSourceTask.commit` gets invoked. This introduced an overhead to the 
> commit process.
>  # The second part is in the checkpoints source task, where we use the new 
> record offsets from {{offset_syncs}} and update the {{checkpoints}} and 
> {{__consumer_offsets}} topics.
> *Problem:*
> Customers who only use MM2 for mirroring data and are not interested in the 
> offset syncing feature can currently disable the second part of this feature 
> by disabling {{emit.checkpoints.enabled}} and/or 
> {{sync.group.offsets.enabled}} to stop emitting to the {{__consumer_offsets}} 
> topic, but nothing disables the first part of the feature. 
> The problem gets worse if they prevent MM2 from creating the offset syncs 
> internal topic, as 
> 1. this adds load, as MM2 will keep trying to force an offset update with 
> every mirrored batch, impacting the performance of MM2.
> 2. they get too many error logs, because the offset syncs topic doesn't exist 
> since they don't use the feature.
> *Possible solution:*
> Allow customers to fully disable the feature if they don't really need it, 
> similar to how other MM2 features, like the heartbeat feature, can be fully 
> disabled, by adding a new config.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16638) Align the naming convention for config and default variables in *Config classes

2024-04-28 Thread Omnia Ibrahim (Jira)
Omnia Ibrahim created KAFKA-16638:
-

 Summary: Align the naming convention for config and default 
variables in *Config classes
 Key: KAFKA-16638
 URL: https://issues.apache.org/jira/browse/KAFKA-16638
 Project: Kafka
  Issue Type: Task
Reporter: Omnia Ibrahim


Some classes in the code violate the naming convention for config, doc, and 
default variables, which is:
 * `_CONFIG` suffix for defining the configuration
 * `_DEFAULT` suffix for the default value
 * `_DOC` suffix for the doc

The following classes need to be updated (see the sketch after this list):
 * `CleanerConfig` and `RemoteLogManagerConfig` to use the `_CONFIG` suffix 
instead of `_PROP`.
 * Others, like `LogConfig` and `QuorumConfig`, to use the `_DEFAULT` suffix 
instead of the `DEFAULT_` prefix.
 * The same goes for `CommonClientConfigs` and `StreamsConfig`; however, these 
are public interfaces and will need a KIP to rename the default value variables 
and mark the old ones as deprecated. This might need to be broken out into a 
different Jira.
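
For illustration, a definition following the target convention could look like 
the sketch below; {{log.cleaner.threads}} is just a stand-in example key, not a 
proposed change to any specific class:

{code:java}
// Sketch of the target naming convention for config/default/doc variables.
public final class ExampleConfig {
    public static final String CLEANER_THREADS_CONFIG = "log.cleaner.threads"; // `_CONFIG` for the key
    public static final int CLEANER_THREADS_DEFAULT = 1;                       // `_DEFAULT` for the default value
    public static final String CLEANER_THREADS_DOC =
        "The number of background threads to use for log cleaning";           // `_DOC` for the documentation
}
{code}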



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16212) Cache partitions by TopicIdPartition instead of TopicPartition

2024-04-25 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17840715#comment-17840715
 ] 

Omnia Ibrahim commented on KAFKA-16212:
---

I believe that https://issues.apache.org/jira/browse/KAFKA-10551 (and possibly 
KAFKA-10549) needs to be done first, as ProduceRequest and its response 
interact with the ReplicaManager cache to append to the log, and it would be 
easier if the produce request path were already topic-ID aware before updating 
the cache. I am prioritising KAFKA-10551. 

> Cache partitions by TopicIdPartition instead of TopicPartition
> --
>
> Key: KAFKA-16212
> URL: https://issues.apache.org/jira/browse/KAFKA-16212
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 3.7.0
>Reporter: Gaurav Narula
>Assignee: Omnia Ibrahim
>Priority: Major
>
> From the discussion in [PR 
> 15263|https://github.com/apache/kafka/pull/15263#discussion_r1471075201], it 
> would be better to cache {{allPartitions}} by {{TopicIdPartition}} instead of 
> {{TopicPartition}} to avoid ambiguity.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15089) Consolidate all the group coordinator configs

2024-04-22 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839744#comment-17839744
 ] 

Omnia Ibrahim commented on KAFKA-15089:
---

I can have a look once I finish KAFKA-15853.

> Consolidate all the group coordinator configs
> -
>
> Key: KAFKA-15089
> URL: https://issues.apache.org/jira/browse/KAFKA-15089
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Priority: Major
>
> The group coordinator configurations are defined in KafkaConfig at the 
> moment. As KafkaConfig is defined in the core module, we can't pass it to the 
> new Java modules to pass the configurations along.
> A suggestion here is to centralize all the configurations of a module in the 
> module itself, similar to what we have done for RemoteLogManagerConfig and 
> RaftConfig. We also need a mechanism to add all the properties defined in the 
> module to KafkaConfig's ConfigDef.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-10551) Support topic IDs in Produce request

2024-04-22 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-10551:
-

Assignee: Omnia Ibrahim

> Support topic IDs in Produce request
> 
>
> Key: KAFKA-10551
> URL: https://issues.apache.org/jira/browse/KAFKA-10551
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Justine Olshan
>Assignee: Omnia Ibrahim
>Priority: Major
>
> Replace the topic name with the topic ID so the request is smaller.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-8041) Flaky Test LogDirFailureTest#testIOExceptionDuringLogRoll

2024-04-18 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838583#comment-17838583
 ] 

Omnia Ibrahim edited comment on KAFKA-8041 at 4/18/24 10:14 AM:


[~soarez] Yes, sorry for the late response. I believe this should be fixed now 
after the merge of [https://github.com/apache/kafka/pull/15335]. It has been 
passing for the last couple of weeks with no flakiness: 
[https://ge.apache.org/scans/tests?search.names=Git%20branch=P28D=kafka=America%2FLos_Angeles=trunk=kafka.server.LogDirFailureTest=testIOExceptionDuringLogRoll(String)%5B2%5D
 
|https://ge.apache.org/scans/tests?search.names=Git%20branch=P28D=kafka=America%2FLos_Angeles=trunk=kafka.server.LogDirFailureTest=testIOExceptionDuringLogRoll(String)%5B2%5D]


was (Author: omnia_h_ibrahim):
[~soarez] I believe this should be fixed now after the merge of 
[https://github.com/apache/kafka/pull/15335]. It has been passing for the last 
couple of weeks with no flakiness: 
[https://ge.apache.org/scans/tests?search.names=Git%20branch=P28D=kafka=America%2FLos_Angeles=trunk=kafka.server.LogDirFailureTest=testIOExceptionDuringLogRoll(String)%5B2%5D
 
|https://ge.apache.org/scans/tests?search.names=Git%20branch=P28D=kafka=America%2FLos_Angeles=trunk=kafka.server.LogDirFailureTest=testIOExceptionDuringLogRoll(String)%5B2%5D]

> Flaky Test LogDirFailureTest#testIOExceptionDuringLogRoll
> -
>
> Key: KAFKA-8041
> URL: https://issues.apache.org/jira/browse/KAFKA-8041
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.0.1, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Bob Barrett
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/236/tests]
> {quote}java.lang.AssertionError: Expected some messages
> at kafka.utils.TestUtils$.fail(TestUtils.scala:357)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:787)
> at 
> kafka.server.LogDirFailureTest.testProduceAfterLogDirFailureOnLeader(LogDirFailureTest.scala:189)
> at 
> kafka.server.LogDirFailureTest.testIOExceptionDuringLogRoll(LogDirFailureTest.scala:63){quote}
> STDOUT
> {quote}[2019-03-05 03:44:58,614] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition topic-6 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,614] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-10 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-4 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-8 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-2 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:45:00,248] ERROR Error while rolling log segment for topic-0 
> in dir 
> /home/jenkins/jenkins-slave/workspace/kafka-2.0-jdk8/core/data/kafka-3869208920357262216
>  (kafka.server.LogDirFailureChannel:76)
> java.io.FileNotFoundException: 
> /home/jenkins/jenkins-slave/workspace/kafka-2.0-jdk8/core/data/kafka-3869208920357262216/topic-0/.index
>  (Not a directory)
> at java.io.RandomAccessFile.open0(Native Method)
> at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
> at kafka.log.AbstractIndex.$anonfun$resize$1(AbstractIndex.scala:121)
> at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:12)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
> at 

[jira] [Commented] (KAFKA-8041) Flaky Test LogDirFailureTest#testIOExceptionDuringLogRoll

2024-04-18 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838583#comment-17838583
 ] 

Omnia Ibrahim commented on KAFKA-8041:
--

[~soarez] I believe this should be fixed now after the merge of 
[https://github.com/apache/kafka/pull/15335]. It has been passing for the last 
couple of weeks with no flakiness: 
[https://ge.apache.org/scans/tests?search.names=Git%20branch=P28D=kafka=America%2FLos_Angeles=trunk=kafka.server.LogDirFailureTest=testIOExceptionDuringLogRoll(String)%5B2%5D
 
|https://ge.apache.org/scans/tests?search.names=Git%20branch=P28D=kafka=America%2FLos_Angeles=trunk=kafka.server.LogDirFailureTest=testIOExceptionDuringLogRoll(String)%5B2%5D]

> Flaky Test LogDirFailureTest#testIOExceptionDuringLogRoll
> -
>
> Key: KAFKA-8041
> URL: https://issues.apache.org/jira/browse/KAFKA-8041
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.0.1, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Bob Barrett
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/236/tests]
> {quote}java.lang.AssertionError: Expected some messages
> at kafka.utils.TestUtils$.fail(TestUtils.scala:357)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:787)
> at 
> kafka.server.LogDirFailureTest.testProduceAfterLogDirFailureOnLeader(LogDirFailureTest.scala:189)
> at 
> kafka.server.LogDirFailureTest.testIOExceptionDuringLogRoll(LogDirFailureTest.scala:63){quote}
> STDOUT
> {quote}[2019-03-05 03:44:58,614] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition topic-6 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,614] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-10 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-4 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-8 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-2 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:45:00,248] ERROR Error while rolling log segment for topic-0 
> in dir 
> /home/jenkins/jenkins-slave/workspace/kafka-2.0-jdk8/core/data/kafka-3869208920357262216
>  (kafka.server.LogDirFailureChannel:76)
> java.io.FileNotFoundException: 
> /home/jenkins/jenkins-slave/workspace/kafka-2.0-jdk8/core/data/kafka-3869208920357262216/topic-0/.index
>  (Not a directory)
> at java.io.RandomAccessFile.open0(Native Method)
> at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
> at kafka.log.AbstractIndex.$anonfun$resize$1(AbstractIndex.scala:121)
> at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:12)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
> at kafka.log.AbstractIndex.resize(AbstractIndex.scala:115)
> at kafka.log.AbstractIndex.$anonfun$trimToValidSize$1(AbstractIndex.scala:184)
> at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:12)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
> at kafka.log.AbstractIndex.trimToValidSize(AbstractIndex.scala:184)
> at kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:501)
> at kafka.log.Log.$anonfun$roll$8(Log.scala:1520)
> at kafka.log.Log.$anonfun$roll$8$adapted(Log.scala:1520)
> at scala.Option.foreach(Option.scala:257)
> at kafka.log.Log.$anonfun$roll$2(Log.scala:1520)
> at 

[jira] [Commented] (KAFKA-15089) Consolidate all the group coordinator configs

2024-04-18 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838562#comment-17838562
 ] 

Omnia Ibrahim commented on KAFKA-15089:
---

Hi [~dajac] should we mark this as resolved now as we merged 
[https://github.com/apache/kafka/pull/15684] or is there anything left here to 
do?

> Consolidate all the group coordinator configs
> -
>
> Key: KAFKA-15089
> URL: https://issues.apache.org/jira/browse/KAFKA-15089
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Priority: Major
>
> The group coordinator configurations are defined in KafkaConfig at the 
> moment. As KafkaConfig is defined in the core module, we can't pass it to the 
> new Java modules to pass the configurations along.
> A suggestion here is to centralize all the configurations of a module in the 
> module itself, similar to what we have done for RemoteLogManagerConfig and 
> RaftConfig. We also need a mechanism to add all the properties defined in the 
> module to KafkaConfig's ConfigDef.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-10549) Add topic ID support to ListOffsets, OffsetForLeaders

2024-03-28 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-10549:
-

Assignee: Omnia Ibrahim

> Add topic ID support to ListOffsets, OffsetForLeaders
> -
>
> Key: KAFKA-10549
> URL: https://issues.apache.org/jira/browse/KAFKA-10549
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Justine Olshan
>Assignee: Omnia Ibrahim
>Priority: Major
>
> ListOffsets, OffsetForLeaders protocols will replace topic name with topic ID 
> and will be used to prevent reads from deleted topics
> This may be split into two or more issues if necessary.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16234) Log directory failure re-creates partitions in another logdir automatically

2024-03-26 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim updated KAFKA-16234:
--
Priority: Critical  (was: Major)

> Log directory failure re-creates partitions in another logdir automatically
> ---
>
> Key: KAFKA-16234
> URL: https://issues.apache.org/jira/browse/KAFKA-16234
> Project: Kafka
>  Issue Type: Sub-task
>  Components: jbod
>Affects Versions: 3.7.0
>Reporter: Gaurav Narula
>Assignee: Omnia Ibrahim
>Priority: Critical
> Fix For: 3.7.1
>
>
> With [KAFKA-16157|https://github.com/apache/kafka/pull/15263] we made changes 
> in {{HostedPartition.Offline}} enum variant to embed {{Partition}} object. 
> Further, {{ReplicaManager::getOrCreatePartition}} tries to compare the old 
> and new topicIds to decide if it needs to create a new log.
> The getter for {{Partition::topicId}} relies on retrieving the topicId from 
> {{log}} field or {{{}logManager.currentLogs{}}}. The former is set to 
> {{None}} when a partition is marked offline and the key for the partition is 
> removed from the latter by {{{}LogManager::handleLogDirFailure{}}}. 
> Therefore, topicId for a partition marked offline always returns {{None}}, 
> and new logs for all partitions in a failed log directory are always created 
> on another disk.
> The broker will fail to restart after the failed disk is repaired, because 
> the same partitions will occur in two different directories. The error does, 
> however, inform the operator to remove the partitions from the disk that 
> failed, which should help with broker startup.
> We can avoid this with KAFKA-16212, but in the short term an immediate 
> solution can be to have the {{Partition}} object accept {{Option[TopicId]}} 
> in its constructor and have it fall back to {{log}} or {{logManager}} if it's 
> unset.
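
A minimal sketch of the short-term fix described above, in Java with simplified 
stand-ins (the real {{Partition}} class is Scala and its collaborators are 
elided here):

{code:java}
import java.util.Optional;
import java.util.UUID;
import java.util.function.Supplier;

final class PartitionSketch {
    private final Optional<UUID> topicIdFromConstructor;          // newly accepted in the constructor
    private final Supplier<Optional<UUID>> logOrLogManagerLookup; // existing fallback path

    PartitionSketch(Optional<UUID> topicId, Supplier<Optional<UUID>> fallback) {
        this.topicIdFromConstructor = topicId;
        this.logOrLogManagerLookup = fallback;
    }

    // Prefer the constructor-supplied topic ID, so that a partition marked
    // offline (whose log field is None) can still report its topic ID instead
    // of falling through to a lookup that no longer finds anything.
    Optional<UUID> topicId() {
        return topicIdFromConstructor.isPresent()
                ? topicIdFromConstructor
                : logOrLogManagerLookup.get();
    }
}
{code}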



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15899) Move kafka.security package from core to server module

2024-03-21 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829527#comment-17829527
 ] 

Omnia Ibrahim commented on KAFKA-15899:
---

[~nizhikov] Sure, I just assigned it to you. 

> Move kafka.security package from core to server module
> --
>
> Key: KAFKA-15899
> URL: https://issues.apache.org/jira/browse/KAFKA-15899
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Nikolay Izhikov
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15899) Move kafka.security package from core to server module

2024-03-21 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-15899:
-

Assignee: Nikolay Izhikov  (was: Omnia Ibrahim)

> Move kafka.security package from core to server module
> --
>
> Key: KAFKA-15899
> URL: https://issues.apache.org/jira/browse/KAFKA-15899
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Nikolay Izhikov
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16293) Test log directory failure in Kraft

2024-03-12 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim updated KAFKA-16293:
--
Fix Version/s: 3.7.1

> Test log directory failure in Kraft
> ---
>
> Key: KAFKA-16293
> URL: https://issues.apache.org/jira/browse/KAFKA-16293
> Project: Kafka
>  Issue Type: Test
>  Components: jbod, kraft
>Affects Versions: 3.7.0
>Reporter: Gaurav Narula
>Assignee: Gaurav Narula
>Priority: Major
> Fix For: 3.7.1
>
>
> We should update the log directory failure system test to run in Kraft mode 
> following the work for KIP 858.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16234) Log directory failure re-creates partitions in another logdir automatically

2024-03-12 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim updated KAFKA-16234:
--
Fix Version/s: 3.7.1

> Log directory failure re-creates partitions in another logdir automatically
> ---
>
> Key: KAFKA-16234
> URL: https://issues.apache.org/jira/browse/KAFKA-16234
> Project: Kafka
>  Issue Type: Sub-task
>  Components: jbod
>Affects Versions: 3.7.0
>Reporter: Gaurav Narula
>Assignee: Omnia Ibrahim
>Priority: Major
> Fix For: 3.7.1
>
>
> With [KAFKA-16157|https://github.com/apache/kafka/pull/15263] we made changes 
> in {{HostedPartition.Offline}} enum variant to embed {{Partition}} object. 
> Further, {{ReplicaManager::getOrCreatePartition}} tries to compare the old 
> and new topicIds to decide if it needs to create a new log.
> The getter for {{Partition::topicId}} relies on retrieving the topicId from 
> {{log}} field or {{{}logManager.currentLogs{}}}. The former is set to 
> {{None}} when a partition is marked offline and the key for the partition is 
> removed from the latter by {{{}LogManager::handleLogDirFailure{}}}. 
> Therefore, topicId for a partition marked offline always returns {{None}}, 
> and new logs for all partitions in a failed log directory are always created 
> on another disk.
> The broker will fail to restart after the failed disk is repaired, because 
> the same partitions will occur in two different directories. The error does, 
> however, inform the operator to remove the partitions from the disk that 
> failed, which should help with broker startup.
> We can avoid this with KAFKA-16212, but in the short term an immediate 
> solution can be to have the {{Partition}} object accept {{Option[TopicId]}} 
> in its constructor and have it fall back to {{log}} or {{logManager}} if it's 
> unset.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16365) AssignmentsManager mismanages completion notifications

2024-03-12 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim updated KAFKA-16365:
--
Affects Version/s: 3.7.0

> AssignmentsManager mismanages completion notifications
> --
>
> Key: KAFKA-16365
> URL: https://issues.apache.org/jira/browse/KAFKA-16365
> Project: Kafka
>  Issue Type: Sub-task
>  Components: jbod
>Affects Versions: 3.7.0
>Reporter: Igor Soarez
>Assignee: Igor Soarez
>Priority: Major
> Fix For: 3.7.1
>
>
> When moving replicas between directories in the same broker, future replica 
> promotion hinges on acknowledgment from the controller of a change in the 
> directory assignment.
>  
> ReplicaAlterLogDirsThread relies on AssignmentsManager for a completion 
> notification of the directory assignment change.
>  
> In its current form, under certain assignment scheduling, AssignmentsManager 
> can both miss completion notifications and prematurely trigger them.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16365) AssignmentsManager mismanages completion notifications

2024-03-12 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim updated KAFKA-16365:
--
Fix Version/s: 3.7.1

> AssignmentsManager mismanages completion notifications
> --
>
> Key: KAFKA-16365
> URL: https://issues.apache.org/jira/browse/KAFKA-16365
> Project: Kafka
>  Issue Type: Sub-task
>  Components: jbod
>Reporter: Igor Soarez
>Assignee: Igor Soarez
>Priority: Major
> Fix For: 3.7.1
>
>
> When moving replicas between directories in the same broker, future replica 
> promotion hinges on acknowledgment from the controller of a change in the 
> directory assignment.
>  
> ReplicaAlterLogDirsThread relies on AssignmentsManager for a completion 
> notification of the directory assignment change.
>  
> In its current form, under certain assignment scheduling, AssignmentsManager 
> can both miss completion notifications and prematurely trigger them.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16365) AssignmentsManager mismanages completion notifications

2024-03-12 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim updated KAFKA-16365:
--
Component/s: jbod

> AssignmentsManager mismanages completion notifications
> --
>
> Key: KAFKA-16365
> URL: https://issues.apache.org/jira/browse/KAFKA-16365
> Project: Kafka
>  Issue Type: Sub-task
>  Components: jbod
>Reporter: Igor Soarez
>Assignee: Igor Soarez
>Priority: Major
>
> When moving replicas between directories in the same broker, future replica 
> promotion hinges on acknowledgment from the controller of a change in the 
> directory assignment.
>  
> ReplicaAlterLogDirsThread relies on AssignmentsManager for a completion 
> notification of the directory assignment change.
>  
> In its current form, under certain assignment scheduling, AssignmentsManager 
> can both miss completion notifications and prematurely trigger them.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16212) Cache partitions by TopicIdPartition instead of TopicPartition

2024-02-20 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818991#comment-17818991
 ] 

Omnia Ibrahim commented on KAFKA-16212:
---

I don't believe a topic ID of zero or null has significant meaning for 
ReplicaManager. The code related to KRaft assumes there will always be a topic 
ID, while other code that doesn't care about topic IDs and interacts with 
ReplicaManager either has not been updated yet or wasn't designed with topic-ID 
awareness. So theoretically this will simplify proposal #1. 

However, we will have to 
1. have validation in various places to handle topic IDs with dummy values, and 
2. possibly revert these dummy values and some of the validations later in the 
future. 

I think if we have been using a similar approach in other places then proposal 
#1 should be fine. 

With all of that said, I have one worry regarding code readability and 
maintenance, as having the topic ID as Option/Optional.empty/null/zero-UUID 
dummy values in the APIs in different places might be a bit confusing when 
extending the code. Is there an agreement as part of KIP-516 on how long this 
transition state will last before we have topic IDs in most places, and on how 
that would look, that I need to be aware of while extending the ReplicaManager 
cache to be topic-ID aware?
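
As a rough illustration of the kind of validation this implies, a minimal 
sketch assuming the dummy value would be Kafka's {{Uuid.ZERO_UUID}} (the helper 
itself is hypothetical):

{code:java}
import org.apache.kafka.common.Uuid;

final class TopicIdValidation {
    // Hypothetical helper: treats null and the zero UUID as "no topic ID".
    // During the transition described above, checks like this would need to
    // live in the various places that consume a possibly-dummy topic ID.
    static boolean hasRealTopicId(Uuid topicId) {
        return topicId != null && !Uuid.ZERO_UUID.equals(topicId);
    }
}
{code}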

> Cache partitions by TopicIdPartition instead of TopicPartition
> --
>
> Key: KAFKA-16212
> URL: https://issues.apache.org/jira/browse/KAFKA-16212
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 3.7.0
>Reporter: Gaurav Narula
>Assignee: Omnia Ibrahim
>Priority: Major
>
> From the discussion in [PR 
> 15263|https://github.com/apache/kafka/pull/15263#discussion_r1471075201], it 
> would be better to cache {{allPartitions}} by {{TopicIdPartition}} instead of 
> {{TopicPartition}} to avoid ambiguity.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16254) Allow MM2 to fully disable offset sync feature

2024-02-14 Thread Omnia Ibrahim (Jira)
Omnia Ibrahim created KAFKA-16254:
-

 Summary: Allow MM2 to fully disable offset sync feature
 Key: KAFKA-16254
 URL: https://issues.apache.org/jira/browse/KAFKA-16254
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Affects Versions: 3.6.0, 3.5.0, 3.7.0
Reporter: Omnia Ibrahim
Assignee: Omnia Ibrahim


*Background:* 
At the moment the offset syncing feature in MM2 is split into two parts:
 # One is in `MirrorSourceTask`, where we store the new record's offset on the 
target cluster in the {{offset_syncs}} internal topic after mirroring the 
record. Before KAFKA-14610 in 3.5, MM2 used to just queue the offsets and 
publish them later, but since 3.5 this behaviour changed: we now publish any 
offset syncs that we've queued up, but have not yet been able to publish, when 
`MirrorSourceTask.commit` gets invoked. This introduced an overhead to the 
commit process.
 # The second part is in the checkpoints source task, where we use the new 
record offsets from {{offset_syncs}} and update the {{checkpoints}} and 
{{__consumer_offsets}} topics.

*Problem:*
Customers who only use MM2 for mirroring data and are not interested in the 
offset syncing feature can currently disable the second part of this feature by 
disabling {{emit.checkpoints.enabled}} and/or {{sync.group.offsets.enabled}} to 
stop emitting to the {{__consumer_offsets}} topic, but nothing disables the 
first part of the feature. 

The problem gets worse if they prevent MM2 from creating the offset syncs 
internal topic, as 
1. this adds load, as MM2 will keep trying to force an offset update with every 
mirrored batch, impacting the performance of MM2.
2. they get too many error logs, because the offset syncs topic doesn't exist 
since they don't use the feature.

*Possible solution:*
Allow customers to fully disable the feature if they don't really need it, 
similar to how other MM2 features, like the heartbeat feature, can be fully 
disabled, by adding a new config.
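
For reference, a hedged sketch of what the desired configuration could look 
like once such a switch exists; the {{emit.offset-syncs.enabled}} name below is 
an assumption modelled on the existing {{emit.checkpoints.enabled}} and 
{{emit.heartbeats.enabled}} flags, not a shipped config:

{code}
# Hypothetical mm2.properties snippet, assuming a new flag in the style of the
# existing emit.* switches:
source->target.enabled = true
source->target.emit.offset-syncs.enabled = false   # hypothetical: disable part 1 (offset_syncs)
source->target.emit.checkpoints.enabled = false    # existing: disable checkpoints
source->target.sync.group.offsets.enabled = false  # existing: disable __consumer_offsets sync
{code}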



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16212) Cache partitions by TopicIdPartition instead of TopicPartition

2024-02-14 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17817425#comment-17817425
 ] 

Omnia Ibrahim commented on KAFKA-16212:
---

Actually, there is another proposal, #4, which is to wait until we drop ZK 
before handling this Jira. That would let us simply replace TopicPartition with 
TopicIdPartition, as nowhere in the code would we have a situation where the 
topic ID might be none. 

> Cache partitions by TopicIdPartition instead of TopicPartition
> --
>
> Key: KAFKA-16212
> URL: https://issues.apache.org/jira/browse/KAFKA-16212
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 3.7.0
>Reporter: Gaurav Narula
>Assignee: Omnia Ibrahim
>Priority: Major
>
> From the discussion in [PR 
> 15263|https://github.com/apache/kafka/pull/15263#discussion_r1471075201], it 
> would be better to cache {{allPartitions}} by {{TopicIdPartition}} instead of 
> {{TopicPartition}} to avoid ambiguity.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16212) Cache partitions by TopicIdPartition instead of TopicPartition

2024-02-14 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17817412#comment-17817412
 ] 

Omnia Ibrahim commented on KAFKA-16212:
---

At the moment the cache in `ReplicaManager.allPartitions` is represented as 
`Pool[TopicPartition, HostedPartition]`, which is a wrapper around 
`ConcurrentHashMap[TopicPartition, HostedPartition]`. Updating this to use 
`TopicIdPartition` as a key turns out to be tricky, as 
 # not all APIs that interact with `ReplicaManager` in order to fetch/update 
the partition cache are aware of topic IDs, like the consumer coordinator, the 
handling of some requests in KafkaApis where the request schema doesn't have a 
topic ID, etc.
 # TopicId is represented as Optional in many places, which means we might end 
up populating it with null or a dummy UUID multiple times to construct a 
TopicIdPartition. 



I have 3 proposals at the moment:
 * *Proposal #1:* Update TopicIdPartition to have a constructor with topicId as 
optional, and change `ReplicaManager.allPartitions` to be 
`LinkedBlockingQueue[TopicIdPartition, HostedPartition]`. _*This might be the 
simplest one as far as I can see.*_ 
 ** Any API that is not topic-ID aware will just get the last entry that 
matches topicIdPartition.topicPartition.
 ** The code will need to make sure that we don't have duplicates by 
`TopicIdPartition` in the `LinkedBlockingQueue`.
 ** We will need to revert having the topic ID as optional in TopicIdPartition 
once everywhere in Kafka is topic-ID aware.


 * *Proposal #2:* Change `ReplicaManager.allPartitions` to `new 
Pool[TopicPartition, LinkedBlockingQueue[(Option[Uuid], HostedPartition)]]`, 
where `Option[Uuid]` represents the topic ID. This makes the cache scheme a 
bit complex. The proposal will 

 ** consider the last entry in the `LinkedBlockingQueue` to be the current 
value.
 ** make sure that the `LinkedBlockingQueue` only has entries for the same 
topic ID.
 ** update topic-ID-aware APIs that need to fetch/update the partition to use 
`TopicPartition` plus the topic ID.
 ** keep non-topic-ID-aware APIs using topic partitions, with the 
replicaManager assuming that these APIs refer to the last entry in the 
`LinkedBlockingQueue`.


 * *Proposal #3:* The other option is to keep two separate caches: one 
`Pool[TopicIdPartition, HostedPartition]` for partitions and another 
`Pool[TopicPartition, Uuid]` for the last assigned topic ID of each partition, 
in order to form a `TopicIdPartition`. This is my least favourite, as having 2 
caches risks one of them going out of date at any time.


[~jolshan] Do you have any strong preferences? I am leaning toward the 1st as 
it is less messy than the others. WDYT?
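
To make proposal #1 concrete, here is a minimal, self-contained sketch of a 
cache keyed by a topic-ID-aware key that still serves lookups from callers that 
only know the {{TopicPartition}}; the key and cache types below are simplified 
stand-ins, not Kafka's real {{TopicIdPartition}}, {{HostedPartition}} and 
{{Pool}}:

{code:java}
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-ins; the real types live in Kafka's core module.
record TopicPartitionKey(String topic, int partition) {}
record TopicIdPartitionKey(Optional<java.util.UUID> topicId, TopicPartitionKey tp) {}

final class PartitionCacheSketch {
    // Keyed by the topic-ID-aware key, as proposal #1 suggests.
    private final Map<TopicIdPartitionKey, Object> partitions = new ConcurrentHashMap<>();

    void put(TopicIdPartitionKey key, Object hostedPartition) {
        partitions.put(key, hostedPartition);
    }

    // Topic-ID-aware callers look up directly.
    Optional<Object> get(TopicIdPartitionKey key) {
        return Optional.ofNullable(partitions.get(key));
    }

    // Non-topic-ID-aware callers match on the TopicPartition alone; the cache
    // must keep at most one live entry per TopicPartition for this to be
    // unambiguous (the deduplication requirement noted in proposal #1).
    Optional<Object> getByTopicPartition(TopicPartitionKey tp) {
        return partitions.entrySet().stream()
                .filter(e -> Objects.equals(e.getKey().tp(), tp))
                .map(Map.Entry::getValue)
                .findFirst();
    }
}
{code}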

> Cache partitions by TopicIdPartition instead of TopicPartition
> --
>
> Key: KAFKA-16212
> URL: https://issues.apache.org/jira/browse/KAFKA-16212
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 3.7.0
>Reporter: Gaurav Narula
>Assignee: Omnia Ibrahim
>Priority: Major
>
> From the discussion in [PR 
> 15263|https://github.com/apache/kafka/pull/15263#discussion_r1471075201], it 
> would be better to cache {{allPartitions}} by {{TopicIdPartition}} instead of 
> {{TopicPartition}} to avoid ambiguity.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16225) Flaky test suite LogDirFailureTest

2024-02-12 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-16225:
-

Assignee: Omnia Ibrahim

> Flaky test suite LogDirFailureTest
> --
>
> Key: KAFKA-16225
> URL: https://issues.apache.org/jira/browse/KAFKA-16225
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Reporter: Greg Harris
>Assignee: Omnia Ibrahim
>Priority: Major
>  Labels: flaky-test
>
> I see this failure on trunk and in PR builds for multiple methods in this 
> test suite:
> {noformat}
> org.opentest4j.AssertionFailedError: expected: <true> but was: <false>    
> at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>     
> at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>     
> at org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)    
> at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)    
> at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:31)    
> at org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:179)    
> at kafka.utils.TestUtils$.causeLogDirFailure(TestUtils.scala:1715)    
> at 
> kafka.server.LogDirFailureTest.testProduceAfterLogDirFailureOnLeader(LogDirFailureTest.scala:186)
>     
> at 
> kafka.server.LogDirFailureTest.testIOExceptionDuringLogRoll(LogDirFailureTest.scala:70){noformat}
> It appears this assertion is failing
> [https://github.com/apache/kafka/blob/f54975c33135140351c50370282e86c49c81bbdd/core/src/test/scala/unit/kafka/utils/TestUtils.scala#L1715]
> The other error which is appearing is this:
> {noformat}
> org.opentest4j.AssertionFailedError: Unexpected exception type thrown, 
> expected:  but was: 
>     
> at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>     
> at org.junit.jupiter.api.AssertThrows.assertThrows(AssertThrows.java:67)    
> at org.junit.jupiter.api.AssertThrows.assertThrows(AssertThrows.java:35)    
> at org.junit.jupiter.api.Assertions.assertThrows(Assertions.java:3111)    
> at 
> kafka.server.LogDirFailureTest.testProduceErrorsFromLogDirFailureOnLeader(LogDirFailureTest.scala:164)
>     
> at 
> kafka.server.LogDirFailureTest.testProduceErrorFromFailureOnLogRoll(LogDirFailureTest.scala:64){noformat}
> Failures appear to have started in this commit, but this does not indicate 
> that this commit is at fault: 
> [https://github.com/apache/kafka/tree/3d95a69a28c2d16e96618cfa9a1eb69180fb66ea]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16162) New created topics are unavailable after upgrading to 3.7

2024-02-09 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim resolved KAFKA-16162.
---
Fix Version/s: 3.7.0
   Resolution: Fixed

> New created topics are unavailable after upgrading to 3.7
> -
>
> Key: KAFKA-16162
> URL: https://issues.apache.org/jira/browse/KAFKA-16162
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Luke Chen
>Assignee: Gaurav Narula
>Priority: Blocker
> Fix For: 3.7.0
>
>
> In 3.7, we introduced the KIP-858 JBOD feature, and the brokerRegistration 
> request will include the `LogDirs` fields with UUID for each log dir in each 
> broker. This info will be stored in the controller and used to identify if 
> the log dir is known and online while handling AssignReplicasToDirsRequest 
> [here|https://github.com/apache/kafka/blob/trunk/metadata/src/main/java/org/apache/kafka/controller/ReplicationControlManager.java#L2093].
>  
> While upgrading from an old version, the kafka cluster will run the 3.7 
> binary with the old metadata version, and then upgrade to the newer version 
> using kafka-features.sh. That means that while brokers start up and send the 
> brokerRegistration request, they'll be using the older metadata version 
> without the `LogDirs` fields included, which leaves the controller with no 
> log dir info for any broker. Later, after the upgrade, if a new topic is 
> created, the flow will go like this:
> 1. The controller assigns replicas and adds them to the metadata log
> 2. Brokers fetch the metadata and apply it
> 3. ReplicaManager#maybeUpdateTopicAssignment will update the topic assignment
> 4. After ASSIGN_REPLICAS_TO_DIRS is sent to the controller with the replica 
> assignment, the controller will think the log dir of the current replica is 
> offline, so it triggers the offline handler and reassigns the leader to 
> another replica, which is also offline, until there are no more replicas to 
> assign, and so it assigns the leader to -1 (i.e. no leader).
> So, the result will be that newly created topics are unavailable (with no 
> leader) because the controller thinks all log dirs are offline.
> {code:java}
> lukchen@lukchen-mac kafka % bin/kafka-topics.sh --describe --topic 
> quickstart-events3 --bootstrap-server localhost:9092  
> 
> Topic: quickstart-events3 TopicId: s8s6tEQyRvmjKI6ctNTgPg PartitionCount: 
> 3   ReplicationFactor: 3Configs: segment.bytes=1073741824
>   Topic: quickstart-events3   Partition: 0Leader: none
> Replicas: 7,2,6 Isr: 6
>   Topic: quickstart-events3   Partition: 1Leader: none
> Replicas: 2,6,7 Isr: 6
>   Topic: quickstart-events3   Partition: 2Leader: none
> Replicas: 6,7,2 Isr: 6
> {code}
> The log snippet in the controller :
> {code:java}
> # handling 1st assignReplicaToDirs request
> [2024-01-18 19:34:47,370] DEBUG [QuorumController id=1] Broker 6 assigned 
> partition quickstart-events3:0 to OFFLINE dir 7K5JBERyyqFFxIXSXYluJA 
> (org.apache.kafka.controller.ReplicationControlManager)
> [2024-01-18 19:34:47,370] DEBUG [QuorumController id=1] Broker 6 assigned 
> partition quickstart-events3:2 to OFFLINE dir 7K5JBERyyqFFxIXSXYluJA 
> (org.apache.kafka.controller.ReplicationControlManager)
> [2024-01-18 19:34:47,371] DEBUG [QuorumController id=1] Broker 6 assigned 
> partition quickstart-events3:1 to OFFLINE dir 7K5JBERyyqFFxIXSXYluJA 
> (org.apache.kafka.controller.ReplicationControlManager)
> [2024-01-18 19:34:47,372] DEBUG [QuorumController id=1] 
> offline-dir-assignment: changing partition(s): quickstart-events3-0, 
> quickstart-events3-2, quickstart-events3-1 
> (org.apache.kafka.controller.ReplicationControlManager)
> [2024-01-18 19:34:47,372] DEBUG [QuorumController id=1] partition change for 
> quickstart-events3-0 with topic ID 6ZIeidfiSTWRiOAmGEwn_g: directories: 
> [AA, AA, AA] -> 
> [7K5JBERyyqFFxIXSXYluJA, AA, AA], 
> partitionEpoch: 0 -> 1 (org.apache.kafka.controller.ReplicationControlManager)
> [2024-01-18 19:34:47,372] DEBUG [QuorumController id=1] Replayed partition 
> change PartitionChangeRecord(partitionId=0, topicId=6ZIeidfiSTWRiOAmGEwn_g, 
> isr=null, leader=-2, replicas=null, removingReplicas=null, 
> addingReplicas=null, leaderRecoveryState=-1, 
> directories=[7K5JBERyyqFFxIXSXYluJA, AA, 
> AA], eligibleLeaderReplicas=null, lastKnownELR=null) for 
> topic quickstart-events3 
> (org.apache.kafka.controller.ReplicationControlManager)
> [2024-01-18 19:34:47,372] DEBUG [QuorumController id=1] partition change for 
> quickstart-events3-2 with topic ID 6ZIeidfiSTWRiOAmGEwn_g: directories: 
> [AA, AA, AA] -> 
> [AA, 

[jira] [Commented] (KAFKA-16162) New created topics are unavailable after upgrading to 3.7

2024-02-09 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816037#comment-17816037
 ] 

Omnia Ibrahim commented on KAFKA-16162:
---

Marking this as resolved, as the PR was committed.

> New created topics are unavailable after upgrading to 3.7
> -
>
> Key: KAFKA-16162
> URL: https://issues.apache.org/jira/browse/KAFKA-16162
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Luke Chen
>Assignee: Gaurav Narula
>Priority: Blocker
> Fix For: 3.7.0
>
>
> In 3.7, we introduced the KIP-858 JBOD feature, and the brokerRegistration 
> request now includes the `LogDirs` field with a UUID for each log dir on each 
> broker. This info is stored in the controller and used to identify whether 
> the log dir is known and online while handling AssignReplicasToDirsRequest 
> [here|https://github.com/apache/kafka/blob/trunk/metadata/src/main/java/org/apache/kafka/controller/ReplicationControlManager.java#L2093].
>  
> While upgrading from an old version, the Kafka cluster first runs the 3.7 
> binary with the old metadata version and is then upgraded to the newer 
> version using kafka-features.sh. That means that when brokers start up and 
> send the brokerRegistration request, they use an older metadata version 
> without the `LogDirs` fields included, leaving the controller with no log dir 
> info for any broker. Later, after the upgrade, if a new topic is created, the 
> flow goes like this:
> 1. The controller assigns replicas and adds them to the metadata log
> 2. Brokers fetch the metadata and apply it
> 3. ReplicaManager#maybeUpdateTopicAssignment updates the topic assignment
> 4. After the broker sends ASSIGN_REPLICAS_TO_DIRS to the controller with the 
> replica assignment, the controller thinks the log dir of the current replica 
> is offline, so it triggers the offline handler and reassigns the leader to 
> another replica, which is also considered offline, until there are no more 
> replicas to assign, at which point it assigns leader -1 (i.e. no leader). 
> So the result is that newly created topics are unavailable (with no leader) 
> because the controller thinks all log dirs are offline (the check is sketched 
> just below).
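
For illustration, here is a minimal sketch of the controller-side check described above (hypothetical names, not the actual ReplicationControlManager code): an assignment's target dir counts as online only if it appears among the broker's registered log dirs, so a registration without {{LogDirs}} makes every assignment look offline.

{code:java}
import java.util.Map;
import java.util.Set;
import java.util.UUID;

// Hypothetical sketch, not the actual Kafka controller code.
final class DirPlacementCheck {
    // Log dirs reported in each broker's registration request; empty when the
    // broker registered with an older metadata version lacking `LogDirs`.
    private final Map<Integer, Set<UUID>> registeredDirsByBroker;

    DirPlacementCheck(Map<Integer, Set<UUID>> registeredDirsByBroker) {
        this.registeredDirsByBroker = registeredDirsByBroker;
    }

    boolean isDirOnline(int brokerId, UUID dirId) {
        Set<UUID> dirs = registeredDirsByBroker.get(brokerId);
        // With no registered dirs, every assignment looks offline: leaders are
        // demoted replica by replica until the partition ends up with leader -1.
        return dirs != null && dirs.contains(dirId);
    }
}
{code}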
> {code:java}
> lukchen@lukchen-mac kafka % bin/kafka-topics.sh --describe --topic 
> quickstart-events3 --bootstrap-server localhost:9092  
> 
> Topic: quickstart-events3 TopicId: s8s6tEQyRvmjKI6ctNTgPg PartitionCount: 
> 3   ReplicationFactor: 3Configs: segment.bytes=1073741824
>   Topic: quickstart-events3   Partition: 0Leader: none
> Replicas: 7,2,6 Isr: 6
>   Topic: quickstart-events3   Partition: 1Leader: none
> Replicas: 2,6,7 Isr: 6
>   Topic: quickstart-events3   Partition: 2Leader: none
> Replicas: 6,7,2 Isr: 6
> {code}
> The log snippet in the controller :
> {code:java}
> # handling 1st assignReplicaToDirs request
> [2024-01-18 19:34:47,370] DEBUG [QuorumController id=1] Broker 6 assigned 
> partition quickstart-events3:0 to OFFLINE dir 7K5JBERyyqFFxIXSXYluJA 
> (org.apache.kafka.controller.ReplicationControlManager)
> [2024-01-18 19:34:47,370] DEBUG [QuorumController id=1] Broker 6 assigned 
> partition quickstart-events3:2 to OFFLINE dir 7K5JBERyyqFFxIXSXYluJA 
> (org.apache.kafka.controller.ReplicationControlManager)
> [2024-01-18 19:34:47,371] DEBUG [QuorumController id=1] Broker 6 assigned 
> partition quickstart-events3:1 to OFFLINE dir 7K5JBERyyqFFxIXSXYluJA 
> (org.apache.kafka.controller.ReplicationControlManager)
> [2024-01-18 19:34:47,372] DEBUG [QuorumController id=1] 
> offline-dir-assignment: changing partition(s): quickstart-events3-0, 
> quickstart-events3-2, quickstart-events3-1 
> (org.apache.kafka.controller.ReplicationControlManager)
> [2024-01-18 19:34:47,372] DEBUG [QuorumController id=1] partition change for 
> quickstart-events3-0 with topic ID 6ZIeidfiSTWRiOAmGEwn_g: directories: 
> [AA, AA, AA] -> 
> [7K5JBERyyqFFxIXSXYluJA, AA, AA], 
> partitionEpoch: 0 -> 1 (org.apache.kafka.controller.ReplicationControlManager)
> [2024-01-18 19:34:47,372] DEBUG [QuorumController id=1] Replayed partition 
> change PartitionChangeRecord(partitionId=0, topicId=6ZIeidfiSTWRiOAmGEwn_g, 
> isr=null, leader=-2, replicas=null, removingReplicas=null, 
> addingReplicas=null, leaderRecoveryState=-1, 
> directories=[7K5JBERyyqFFxIXSXYluJA, AA, 
> AA], eligibleLeaderReplicas=null, lastKnownELR=null) for 
> topic quickstart-events3 
> (org.apache.kafka.controller.ReplicationControlManager)
> [2024-01-18 19:34:47,372] DEBUG [QuorumController id=1] partition change for 
> quickstart-events3-2 with topic ID 6ZIeidfiSTWRiOAmGEwn_g: directories: 
> [AA, AA, 

[jira] [Resolved] (KAFKA-14616) Topic recreation with offline broker causes permanent URPs

2024-02-09 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim resolved KAFKA-14616.
---
Fix Version/s: 3.7.0
 Assignee: Colin McCabe
   Resolution: Fixed

> Topic recreation with offline broker causes permanent URPs
> --
>
> Key: KAFKA-14616
> URL: https://issues.apache.org/jira/browse/KAFKA-14616
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Affects Versions: 3.3.1
>Reporter: Omnia Ibrahim
>Assignee: Colin McCabe
>Priority: Major
> Fix For: 3.7.0
>
>
> We are facing an odd situation when we delete and recreate a topic while a 
> broker is offline in KRaft mode. 
> Here’s what we saw, step by step:
>  # Created topic {{foo.test}} with 10 partitions and 4 replicas — Topic 
> {{foo.test}} was created with topic ID {{MfuZbwdmSMaiSa0g6__TPg}}
>  # Took broker 4 offline — which held replicas for partitions  {{0, 3, 4, 5, 
> 7, 8, 9}}
>  # Deleted topic {{foo.test}} — The deletion process was successful, despite 
> the fact that broker 4 still held replicas for partitions {{0, 3, 4, 5, 7, 8, 
> 9}} on local disk.
>  # Recreated topic {{foo.test}} with 10 partitions and 4 replicas. — Topic 
> {{foo.test}} was created with topic ID {{RzalpqQ9Q7ub2M2afHxY4Q}} and 
> partitions {{0, 1, 2, 7, 8, 9}} got assigned to broker 4 (which was still 
> offline). Notice here that partitions {{0, 7, 8, 9}} are common between the 
> assignment of the deleted topic ({{{}topic_id: MfuZbwdmSMaiSa0g6__TPg{}}}) 
> and the recreated topic ({{{}topic_id: RzalpqQ9Q7ub2M2afHxY4Q{}}}).
>  # Brought broker 4 back online.
>  # Broker started to create new partition replicas for the recreated topic 
> {{foo.test}} ({{{}topic_id: RzalpqQ9Q7ub2M2afHxY4Q{}}})
>  # The broker hit the following error {{Tried to assign topic ID 
> RzalpqQ9Q7ub2M2afHxY4Q to log for topic partition foo.test-9, but log already 
> contained topic ID MfuZbwdmSMaiSa0g6__TPg}}. As a result of this error the 
> broker decided to rename the log dirs for partitions {{0, 3, 4, 5, 7, 8, 9}} to 
> {{{}-.-delete{}}} (see the sketch after the describe output below).
>  # Ran {{ls }}
> {code:java}
> foo.test-0.658f87fb9a2e42a590b5d7dcc28862b5-delete/
> foo.test-1/
> foo.test-2/
> foo.test-3.a68f05d05bcc4e579087551b539af311-delete/
> foo.test-4.79ce30a5310d4950ad1b28f226f74895-delete/
> foo.test-5.76ed04da75bf46c3a63342be1eb44450-delete/
> foo.test-6/
> foo.test-7.c2d33db3bf844e9ebbcd9ef22f5270da-delete/
> foo.test-8.33836969ac714b41b69b5334a5068ce0-delete/
> foo.test-9.48e1494f4fac48c8aec009bf77d5e4ee-delete/{code}
>       9. Waited until the deletion of the old topic was done and ran {{ls 
> }} again; we expected to see log dirs for partitions 
> {{0, 1, 2, 7, 8, 9}}, however the result was:
> {code:java}
> foo.test-1/
> foo.test-2/
> foo.test-6/{code}
>      10. Ran {{kafka-topics.sh --command-config cmd.properties 
> --bootstrap-server  --describe --topic foo.test}}
> {code:java}
> Topic: foo.test TopicId: RzalpqQ9Q7ub2M2afHxY4Q PartitionCount: 10 
> ReplicationFactor: 4 Configs: 
> min.insync.replicas=2,segment.bytes=1073741824,max.message.bytes=3145728,unclean.leader.election.enable=false,retention.bytes=10
> Topic: foo.test Partition: 0 Leader: 2 Replicas: 2,3,4,5 Isr: 2,3,5
> Topic: foo.test Partition: 1 Leader: 3 Replicas: 3,4,5,6 Isr: 3,5,6,4
> Topic: foo.test Partition: 2 Leader: 5 Replicas: 5,4,6,1 Isr: 5,6,1,4
> Topic: foo.test Partition: 3 Leader: 5 Replicas: 5,6,1,2 Isr: 5,6,1,2
> Topic: foo.test Partition: 4 Leader: 6 Replicas: 6,1,2,3 Isr: 6,1,2,3
> Topic: foo.test Partition: 5 Leader: 1 Replicas: 1,6,2,5 Isr: 1,6,2,5
> Topic: foo.test Partition: 6 Leader: 6 Replicas: 6,2,5,4 Isr: 6,2,5,4
> Topic: foo.test Partition: 7 Leader: 2 Replicas: 2,5,4,3 Isr: 2,5,3
> Topic: foo.test Partition: 8 Leader: 5 Replicas: 5,4,3,1 Isr: 5,3,1
> Topic: foo.test Partition: 9 Leader: 3 Replicas: 3,4,1,6 Isr: 3,1,6{code}
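
The rename in step 7 above corresponds to a topic ID consistency check when the broker loads a log. A minimal sketch of that decision, with hypothetical names and simplified behaviour (not the actual LogManager code):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;

// Hypothetical sketch of the check behind "Tried to assign topic ID ... but
// log already contained topic ID ..."; not the actual broker code.
final class TopicIdCheck {
    static void assignTopicId(Path logDir, UUID storedId, UUID assignedId) throws IOException {
        if (storedId != null && !storedId.equals(assignedId)) {
            // Schedule the stale log for deletion, matching the observed
            // "<topic>-<partition>.<uuid>-delete" directory names.
            String suffix = UUID.randomUUID().toString().replace("-", "");
            Path renamed = logDir.resolveSibling(logDir.getFileName() + "." + suffix + "-delete");
            Files.move(logDir, renamed);
            throw new IllegalStateException("Tried to assign topic ID " + assignedId
                + " to log for topic partition " + logDir.getFileName()
                + ", but log already contained topic ID " + storedId);
        }
    }
}
{code}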
> Here’s a sample of broker logs
>  
> {code:java}
> {"timestamp":"2023-01-11T15:19:53,620Z","level":"INFO","thread":"kafka-scheduler-8","message":"Deleted
>  log for partition foo.test-9 in 
> /kafka/d1/data/foo.test-9.48e1494f4fac48c8aec009bf77d5e4ee-delete.","logger":"kafka.log.LogManager"}
> {"timestamp":"2023-01-11T15:19:53,617Z","level":"INFO","thread":"kafka-scheduler-8","message":"Deleted
>  time index 
> /kafka/d1/data/foo.test-9.48e1494f4fac48c8aec009bf77d5e4ee-delete/.timeindex.deleted.","logger":"kafka.log.LogSegment"}
> {"timestamp":"2023-01-11T15:19:53,617Z","level":"INFO","thread":"kafka-scheduler-8","message":"Deleted
>  offset index 
> /kafka/d1/data/foo.test-9.48e1494f4fac48c8aec009bf77d5e4ee-delete/.index.deleted.","logger":"kafka.log.LogSegment"}
> 

[jira] [Commented] (KAFKA-14616) Topic recreation with offline broker causes permanent URPs

2024-02-09 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816034#comment-17816034
 ] 

Omnia Ibrahim commented on KAFKA-14616:
---

I'll mark this as resolved as [~cmccabe] committed the PR. 

> Topic recreation with offline broker causes permanent URPs
> --
>
> Key: KAFKA-14616
> URL: https://issues.apache.org/jira/browse/KAFKA-14616
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Affects Versions: 3.3.1
>Reporter: Omnia Ibrahim
>Priority: Major
>
> We are facing an odd situation when we delete and recreate a topic while a 
> broker is offline in KRaft mode. 
> Here’s what we saw, step by step:
>  # Created topic {{foo.test}} with 10 partitions and 4 replicas — Topic 
> {{foo.test}} was created with topic ID {{MfuZbwdmSMaiSa0g6__TPg}}
>  # Took broker 4 offline — which held replicas for partitions  {{0, 3, 4, 5, 
> 7, 8, 9}}
>  # Deleted topic {{foo.test}} — The deletion process was successful, despite 
> the fact that broker 4 still held replicas for partitions {{0, 3, 4, 5, 7, 8, 
> 9}} on local disk.
>  # Recreated topic {{foo.test}} with 10 partitions and 4 replicas. — Topic 
> {{foo.test}} was created with topic ID {{RzalpqQ9Q7ub2M2afHxY4Q}} and 
> partitions {{0, 1, 2, 7, 8, 9}} got assigned to broker 4 (which was still 
> offline). Notice here that partitions {{0, 7, 8, 9}} are common between the 
> assignment of the deleted topic ({{{}topic_id: MfuZbwdmSMaiSa0g6__TPg{}}}) 
> and the recreated topic ({{{}topic_id: RzalpqQ9Q7ub2M2afHxY4Q{}}}).
>  # Brought broker 4 back online.
>  # Broker started to create new partition replicas for the recreated topic 
> {{foo.test}} ({{{}topic_id: RzalpqQ9Q7ub2M2afHxY4Q{}}})
>  # The broker hit the following error {{Tried to assign topic ID 
> RzalpqQ9Q7ub2M2afHxY4Q to log for topic partition foo.test-9, but log already 
> contained topic ID MfuZbwdmSMaiSa0g6__TPg}}. As a result of this error the 
> broker decided to rename the log dirs for partitions {{0, 3, 4, 5, 7, 8, 9}} to 
> {{{}-.-delete{}}}.
>  # Ran {{ls }}
> {code:java}
> foo.test-0.658f87fb9a2e42a590b5d7dcc28862b5-delete/
> foo.test-1/
> foo.test-2/
> foo.test-3.a68f05d05bcc4e579087551b539af311-delete/
> foo.test-4.79ce30a5310d4950ad1b28f226f74895-delete/
> foo.test-5.76ed04da75bf46c3a63342be1eb44450-delete/
> foo.test-6/
> foo.test-7.c2d33db3bf844e9ebbcd9ef22f5270da-delete/
> foo.test-8.33836969ac714b41b69b5334a5068ce0-delete/
> foo.test-9.48e1494f4fac48c8aec009bf77d5e4ee-delete/{code}
>       9. Waited until the deletion of the old topic was done and ran {{ls 
> }} again; we expected to see log dirs for partitions 
> {{0, 1, 2, 7, 8, 9}}, however the result was:
> {code:java}
> foo.test-1/
> foo.test-2/
> foo.test-6/{code}
>      10. Ran {{kafka-topics.sh --command-config cmd.properties 
> --bootstrap-server  --describe --topic foo.test}}
> {code:java}
> Topic: foo.test TopicId: RzalpqQ9Q7ub2M2afHxY4Q PartitionCount: 10 
> ReplicationFactor: 4 Configs: 
> min.insync.replicas=2,segment.bytes=1073741824,max.message.bytes=3145728,unclean.leader.election.enable=false,retention.bytes=10
> Topic: foo.test Partition: 0 Leader: 2 Replicas: 2,3,4,5 Isr: 2,3,5
> Topic: foo.test Partition: 1 Leader: 3 Replicas: 3,4,5,6 Isr: 3,5,6,4
> Topic: foo.test Partition: 2 Leader: 5 Replicas: 5,4,6,1 Isr: 5,6,1,4
> Topic: foo.test Partition: 3 Leader: 5 Replicas: 5,6,1,2 Isr: 5,6,1,2
> Topic: foo.test Partition: 4 Leader: 6 Replicas: 6,1,2,3 Isr: 6,1,2,3
> Topic: foo.test Partition: 5 Leader: 1 Replicas: 1,6,2,5 Isr: 1,6,2,5
> Topic: foo.test Partition: 6 Leader: 6 Replicas: 6,2,5,4 Isr: 6,2,5,4
> Topic: foo.test Partition: 7 Leader: 2 Replicas: 2,5,4,3 Isr: 2,5,3
> Topic: foo.test Partition: 8 Leader: 5 Replicas: 5,4,3,1 Isr: 5,3,1
> Topic: foo.test Partition: 9 Leader: 3 Replicas: 3,4,1,6 Isr: 3,1,6{code}
> Here’s a sample of broker logs
>  
> {code:java}
> {"timestamp":"2023-01-11T15:19:53,620Z","level":"INFO","thread":"kafka-scheduler-8","message":"Deleted
>  log for partition foo.test-9 in 
> /kafka/d1/data/foo.test-9.48e1494f4fac48c8aec009bf77d5e4ee-delete.","logger":"kafka.log.LogManager"}
> {"timestamp":"2023-01-11T15:19:53,617Z","level":"INFO","thread":"kafka-scheduler-8","message":"Deleted
>  time index 
> /kafka/d1/data/foo.test-9.48e1494f4fac48c8aec009bf77d5e4ee-delete/.timeindex.deleted.","logger":"kafka.log.LogSegment"}
> {"timestamp":"2023-01-11T15:19:53,617Z","level":"INFO","thread":"kafka-scheduler-8","message":"Deleted
>  offset index 
> /kafka/d1/data/foo.test-9.48e1494f4fac48c8aec009bf77d5e4ee-delete/.index.deleted.","logger":"kafka.log.LogSegment"}
> {"timestamp":"2023-01-11T15:19:53,615Z","level":"INFO","thread":"kafka-scheduler-8","message":"Deleted
>  log 
> 

[jira] [Assigned] (KAFKA-16212) Cache partitions by TopicIdPartition instead of TopicPartition

2024-01-31 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-16212:
-

Assignee: Omnia Ibrahim

> Cache partitions by TopicIdPartition instead of TopicPartition
> --
>
> Key: KAFKA-16212
> URL: https://issues.apache.org/jira/browse/KAFKA-16212
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 3.7.0
>Reporter: Gaurav Narula
>Assignee: Omnia Ibrahim
>Priority: Major
>
> From the discussion in [PR 
> 15263|https://github.com/apache/kafka/pull/15263#discussion_r1471075201], it 
> would be better to cache {{allPartitions}} by {{TopicIdPartition}} instead of 
> {{TopicPartition}} to avoid ambiguity.
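
As a rough illustration of the proposal (the cache and value types here are stand-ins, not the real ReplicaManager structures), keying by {{TopicIdPartition}} means a recreated topic, same name but new topic ID, misses stale entries instead of colliding with them:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.kafka.common.TopicIdPartition;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.Uuid;

// Illustrative sketch only; the real cache lives in ReplicaManager.
final class PartitionCache {
    private final Map<TopicIdPartition, Object> allPartitions = new ConcurrentHashMap<>();

    void put(Uuid topicId, TopicPartition tp, Object partitionState) {
        allPartitions.put(new TopicIdPartition(topicId, tp), partitionState);
    }

    // Lookups must now supply the topic ID, so two incarnations of the same
    // topic name cannot be confused with each other.
    Object get(Uuid topicId, TopicPartition tp) {
        return allPartitions.get(new TopicIdPartition(topicId, tp));
    }
}
{code}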



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15853) Move KafkaConfig to server module

2023-12-12 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17795820#comment-17795820
 ] 

Omnia Ibrahim commented on KAFKA-15853:
---

Another thing we depend on here is `KafkaZkClient` (needed for 
DynamicBrokerConfig), `Broker`, `LogManager`, etc. I think we also need 
interfaces that cover these as well, so we avoid depending on core from server. 
My only concern here is that anything pulled in because of `kafka.zk` will be 
dropped anyway in 4.0, so I'm not sure how much effort should be put into these. 
[~ijuma] any thoughts?

> Move KafkaConfig to server module
> -
>
> Key: KAFKA-15853
> URL: https://issues.apache.org/jira/browse/KAFKA-15853
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Omnia Ibrahim
>Priority: Major
>
> The server module is a Java-only module, so this also requires converting 
> from Scala to Java.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15858) Broker stays fenced until all assignments are correct

2023-12-07 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim resolved KAFKA-15858.
---
Resolution: Won't Fix

`BrokerHeartbeatManager.calculateNextBrokerState` already keeps the broker 
fenced (even if the broker asked to be unfenced) if it hasn't caught up with the 
metadata.
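
A minimal sketch of that behaviour (simplified, with hypothetical names; the real logic lives in {{BrokerHeartbeatManager.calculateNextBrokerState}}):

{code:java}
// Simplified sketch, not the actual controller code.
final class HeartbeatSketch {
    enum BrokerState { FENCED, UNFENCED }

    static BrokerState nextState(boolean brokerWantsUnfenced,
                                 long brokerMetadataOffset,
                                 long controllerMetadataOffset) {
        boolean caughtUp = brokerMetadataOffset >= controllerMetadataOffset;
        // Even if the broker asks to be unfenced, it stays FENCED until it has
        // caught up with the metadata log.
        return (brokerWantsUnfenced && caughtUp) ? BrokerState.UNFENCED : BrokerState.FENCED;
    }
}
{code}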

> Broker stays fenced until all assignments are correct
> -
>
> Key: KAFKA-15858
> URL: https://issues.apache.org/jira/browse/KAFKA-15858
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Igor Soarez
>Assignee: Omnia Ibrahim
>Priority: Major
>
> Until the broker has caught up with metadata AND corrected any 
> incorrect directory assignments, it should continue to want to stay fenced.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15853) Move KafkaConfig to server module

2023-12-06 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793645#comment-17793645
 ] 

Omnia Ibrahim commented on KAFKA-15853:
---

[~fvaleri] I'm still working on it; it's nearly there (maybe in a week or a week 
and a half). It's just not small work, as KafkaConfig depends on too many other 
classes that need to be moved at the same time. I'll have a look at your PR and 
let you know when the KafkaConfig PR is ready for review. 

> Move KafkaConfig to server module
> -
>
> Key: KAFKA-15853
> URL: https://issues.apache.org/jira/browse/KAFKA-15853
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Omnia Ibrahim
>Priority: Major
>
> The server module is a Java-only module, so this also requires converting 
> from Scala to Java.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15858) Broker stays fenced until all assignments are correct

2023-11-30 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim updated KAFKA-15858:
--
Attachment: image.png

> Broker stays fenced until all assignments are correct
> -
>
> Key: KAFKA-15858
> URL: https://issues.apache.org/jira/browse/KAFKA-15858
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Igor Soarez
>Assignee: Omnia Ibrahim
>Priority: Major
> Attachments: image.png
>
>
> Until the broker has caught up with metadata AND corrected any 
> incorrect directory assignments, it should continue to want to stay fenced.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15858) Broker stays fenced until all assignments are correct

2023-11-30 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim updated KAFKA-15858:
--
Attachment: (was: image.png)

> Broker stays fenced until all assignments are correct
> -
>
> Key: KAFKA-15858
> URL: https://issues.apache.org/jira/browse/KAFKA-15858
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Igor Soarez
>Assignee: Omnia Ibrahim
>Priority: Major
>
> Until the broker has caught up with metadata AND corrected any 
> incorrect directory assignments, it should continue to want to stay fenced.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-15853) Move KafkaConfig to server module

2023-11-25 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17789703#comment-17789703
 ] 

Omnia Ibrahim edited comment on KAFKA-15853 at 11/25/23 6:12 PM:
-

As KafkaConfig depends on the companion objects of many other Configs, Managers 
and other classes, I'm thinking of moving at least the companion objects of 
these:
 - OffsetConfig will move to group-coordinator
 - TransactionLog and TransactionStateManager to group-coordinator under the 
transaction package
 - DynamicBrokerConfig to server -> may need a separate Jira ticket, or update 
this one to cover moving all KafkaConfig dependencies including 
`DynamicBrokerConfig`. The only challenge here is that DynamicBrokerConfig 
depends on the LogCleaner companion object, DynamicLogConfig, DynamicThreadPool, 
DynamicListenerConfig, ProducerStateManagerConfig and DynamicRemoteLogConfig, 
which will all need to move as well.
 - ReplicationQuotaManagerConfig to server -> currently I will move it out of 
ReplicationQuotaManager.scala to its own class in the server module
 - ClientQuotaManagerConfig to server -> same as ReplicationQuotaManagerConfig, 
will move it out of ClientQuotaManager.scala to its own class in server
 - kafka.server.KafkaRaftServer.\{BrokerRole, ControllerRole, ProcessRole} -> 
move these ProcessRole objects to the server module or the raft module (I think 
server is a better fit).

[~ijuma] thoughts?


was (Author: omnia_h_ibrahim):
As KafkaConfig depends on the companion objects of many other Configs, Managers 
and other classes, I'm thinking of moving at least the companion objects of 
these:
 - OffsetConfig will move to group-coordinator
 - TransactionLog and TransactionStateManager to group-coordinator under the 
transaction package
 - DynamicBrokerConfig to server -> may need a separate Jira ticket, or update 
this one to cover moving all KafkaConfig dependencies including 
`DynamicBrokerConfig`
 - ReplicationQuotaManagerConfig to server -> currently I will move it out of 
ReplicationQuotaManager.scala to its own class in the server module
 - ClientQuotaManagerConfig to server -> same as ReplicationQuotaManagerConfig, 
will move it out of ClientQuotaManager.scala to its own class in server
 - kafka.server.KafkaRaftServer.\{BrokerRole, ControllerRole, ProcessRole} -> 
move these ProcessRole objects to the server module or the raft module (I think 
server is a better fit).

[~ijuma] thoughts?

> Move KafkaConfig to server module
> -
>
> Key: KAFKA-15853
> URL: https://issues.apache.org/jira/browse/KAFKA-15853
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Omnia Ibrahim
>Priority: Major
>
> The server module is a Java-only module, so this also requires converting 
> from Scala to Java.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-15853) Move KafkaConfig to server module

2023-11-25 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17789703#comment-17789703
 ] 

Omnia Ibrahim edited comment on KAFKA-15853 at 11/25/23 6:10 PM:
-

As KafkaConfig depends on the companion objects of many other Configs, Managers 
and other classes, I'm thinking of moving at least the companion objects of 
these:
 - OffsetConfig will move to group-coordinator
 - TransactionLog and TransactionStateManager to group-coordinator under the 
transaction package
 - DynamicBrokerConfig to server -> may need a separate Jira ticket, or update 
this one to cover moving all KafkaConfig dependencies including 
`DynamicBrokerConfig`
 - ReplicationQuotaManagerConfig to server -> currently I will move it out of 
ReplicationQuotaManager.scala to its own class in the server module
 - ClientQuotaManagerConfig to server -> same as ReplicationQuotaManagerConfig, 
will move it out of ClientQuotaManager.scala to its own class in server
 - kafka.server.KafkaRaftServer.\{BrokerRole, ControllerRole, ProcessRole} -> 
move these ProcessRole objects to the server module or the raft module (I think 
server is a better fit).

[~ijuma] thoughts?


was (Author: omnia_h_ibrahim):
As KafkaConfig depends on the companion objects of many other Configs, Managers 
and other classes, I'm thinking of moving at least the companion objects of 
these: 
- OffsetConfig will move to group-coordinator
- TransactionLog and TransactionStateManager to group-coordinator under the 
transaction package
- DynamicBrokerConfig to server -> needs a Jira ticket (I'm mostly done with 
this work)
- ReplicationQuotaManagerConfig to server -> currently I will move it out of 
ReplicationQuotaManager.scala to its own class in the server module
- ClientQuotaManagerConfig to server -> same as ReplicationQuotaManagerConfig, 
will move it out of ClientQuotaManager.scala to its own class in server
- kafka.server.KafkaRaftServer.\{BrokerRole, ControllerRole, ProcessRole} -> 
move these ProcessRole objects to the server module or the raft module (I think 
server is a better fit). 

[~ijuma] thoughts?

> Move KafkaConfig to server module
> -
>
> Key: KAFKA-15853
> URL: https://issues.apache.org/jira/browse/KAFKA-15853
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Omnia Ibrahim
>Priority: Major
>
> The server module is a Java-only module, so this also requires converting 
> from Scala to Java.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15895) Move DynamicBrokerConfig to server module

2023-11-25 Thread Omnia Ibrahim (Jira)
Omnia Ibrahim created KAFKA-15895:
-

 Summary: Move DynamicBrokerConfig to server module
 Key: KAFKA-15895
 URL: https://issues.apache.org/jira/browse/KAFKA-15895
 Project: Kafka
  Issue Type: Sub-task
Reporter: Omnia Ibrahim
Assignee: Omnia Ibrahim






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15853) Move KafkaConfig to server module

2023-11-25 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17789703#comment-17789703
 ] 

Omnia Ibrahim commented on KAFKA-15853:
---

As KafkaConfig depends on the companion objects of many other Configs, Managers 
and other classes, I'm thinking of moving at least the companion objects of 
these: 
- OffsetConfig will move to group-coordinator
- TransactionLog and TransactionStateManager to group-coordinator under the 
transaction package
- DynamicBrokerConfig to server -> needs a Jira ticket (I'm mostly done with 
this work)
- ReplicationQuotaManagerConfig to server -> currently I will move it out of 
ReplicationQuotaManager.scala to its own class in the server module
- ClientQuotaManagerConfig to server -> same as ReplicationQuotaManagerConfig, 
will move it out of ClientQuotaManager.scala to its own class in server
- kafka.server.KafkaRaftServer.\{BrokerRole, ControllerRole, ProcessRole} -> 
move these ProcessRole objects to the server module or the raft module (I think 
server is a better fit). 

[~ijuma] thoughts?

> Move KafkaConfig to server module
> -
>
> Key: KAFKA-15853
> URL: https://issues.apache.org/jira/browse/KAFKA-15853
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Omnia Ibrahim
>Priority: Major
>
> The server module is a Java-only module, so this also requires converting 
> from Scala to Java.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14527) Move `kafka.security` from `core` to separate module

2023-11-25 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17789700#comment-17789700
 ] 

Omnia Ibrahim commented on KAFKA-14527:
---

I think this makes more sense. Thanks, [~ijuma]. Please move it and assign it 
to me.

> Move `kafka.security` from `core` to separate module
> 
>
> Key: KAFKA-14527
> URL: https://issues.apache.org/jira/browse/KAFKA-14527
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Omnia Ibrahim
>Priority: Major
>
> A possible module name would be `server-security`. We should consider moving 
> `StandardAuthorizer` and `org.apache.kafka.server.authorizer.Authorizer` to 
> this module too.
> See KAFKA-14524 for more context.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15853) Move KafkaConfig to server module

2023-11-20 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17787998#comment-17787998
 ] 

Omnia Ibrahim commented on KAFKA-15853:
---

I assigned it to myself. One note: `KafkaConfig` depends on default values of 
other configs like `OffsetConfig` and `TransactionLog`. I'll move the defaults 
for these configs out of core and into `group-coordinator` for now; as I can't 
see any work scheduled/planned for moving `transaction` out of core, I'm not 
sure whether `group-coordinator` is the right place for these. If not, maybe we 
need to move all default config values to a new module or to `server-common`. 
WDYT?
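
Purely as an illustration of the shape such a move could take (the class name and constants below are hypothetical, not the final layout), the defaults could live in a small Java-only holder that both core and {{group-coordinator}} can depend on:

{code:java}
// Hypothetical sketch of a Java-only defaults holder; the class name and the
// choice of constants are illustrative, not the actual layout of the move.
public final class OffsetConfigDefaults {
    private OffsetConfigDefaults() {}

    public static final int LOAD_BUFFER_SIZE = 5 * 1024 * 1024;
    public static final short TOPIC_REPLICATION_FACTOR = 3;
    public static final int TOPIC_PARTITIONS = 50;
}
{code}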

> Move KafkaConfig to server module
> -
>
> Key: KAFKA-15853
> URL: https://issues.apache.org/jira/browse/KAFKA-15853
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Omnia Ibrahim
>Priority: Major
>
> The server module is a Java-only module, so this also requires converting 
> from Scala to Java.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15853) Move KafkaConfig to server module

2023-11-20 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-15853:
-

Assignee: Omnia Ibrahim

> Move KafkaConfig to server module
> -
>
> Key: KAFKA-15853
> URL: https://issues.apache.org/jira/browse/KAFKA-15853
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Omnia Ibrahim
>Priority: Major
>
> The server module is a Java-only module, so this also requires converting 
> from Scala to Java.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15853) Move KafkaConfig to server module

2023-11-19 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17787672#comment-17787672
 ] 

Omnia Ibrahim commented on KAFKA-15853:
---

[~ijuma] any plans for when this will land, as KAFKA-14527 depends on it? If 
you are busy, I'm happy to take this ticket.

> Move KafkaConfig to server module
> -
>
> Key: KAFKA-15853
> URL: https://issues.apache.org/jira/browse/KAFKA-15853
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Priority: Major
>
> The server module is a Java-only module, so this also requires converting 
> from Scala to Java.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15853) Move KafkaConfig to server module

2023-11-19 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-15853:
-

Assignee: (was: Omnia Ibrahim)

> Move KafkaConfig to server module
> -
>
> Key: KAFKA-15853
> URL: https://issues.apache.org/jira/browse/KAFKA-15853
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Priority: Major
>
> The server module is a Java-only module, so this also requires converting 
> from Scala to Java.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15853) Move KafkaConfig to server module

2023-11-19 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-15853:
-

Assignee: Omnia Ibrahim

> Move KafkaConfig to server module
> -
>
> Key: KAFKA-15853
> URL: https://issues.apache.org/jira/browse/KAFKA-15853
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Omnia Ibrahim
>Priority: Major
>
> The server module is a Java-only module, so this also requires converting 
> from Scala to Java.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15365) Broker-side replica management changes

2023-11-17 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-15365:
-

Assignee: Omnia Ibrahim

> Broker-side replica management changes
> --
>
> Key: KAFKA-15365
> URL: https://issues.apache.org/jira/browse/KAFKA-15365
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Igor Soarez
>Assignee: Omnia Ibrahim
>Priority: Major
>
> On the broker side, process metadata changes to partition directories as the 
> broker catches up to metadata, as described in KIP-858 under "Replica 
> management".
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15363) Broker log directory failure changes

2023-10-25 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-15363:
-

Assignee: Omnia Ibrahim

> Broker log directory failure changes
> 
>
> Key: KAFKA-15363
> URL: https://issues.apache.org/jira/browse/KAFKA-15363
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Igor Soarez
>Assignee: Omnia Ibrahim
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15364) Handle log directory failure in the Controller

2023-10-25 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-15364:
-

Assignee: (was: Omnia Ibrahim)

> Handle log directory failure in the Controller
> --
>
> Key: KAFKA-15364
> URL: https://issues.apache.org/jira/browse/KAFKA-15364
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Igor Soarez
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15364) Handle log directory failure in the Controller

2023-10-25 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-15364:
-

Assignee: Omnia Ibrahim

> Handle log directory failure in the Controller
> --
>
> Key: KAFKA-15364
> URL: https://issues.apache.org/jira/browse/KAFKA-15364
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Igor Soarez
>Assignee: Omnia Ibrahim
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15465) MM2 not working when its internal topics are pre-created on a cluster that disallows topic creation

2023-10-11 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim updated KAFKA-15465:
--
Fix Version/s: 3.6.1

> MM2 not working when its internal topics are pre-created on a cluster that 
> disallows topic creation
> ---
>
> Key: KAFKA-15465
> URL: https://issues.apache.org/jira/browse/KAFKA-15465
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.4.1
>Reporter: Ahmed HIBOT
>Assignee: Omnia Ibrahim
>Priority: Major
> Fix For: 3.5.2, 3.7.0, 3.6.1
>
>
> h1. Replication steps
>  * Set up a source Kafka cluster (alias SOURCE) which doesn't allow MM2 to 
> create topics (therefore it doesn't allow the creation of its internal topics)
>  * Create the MM2 internal topics in the source Kafka cluster
>  * Set up a target Kafka cluster (alias TARGET)
>  * Enable one-way replication SOURCE->TARGET
> MM2 will attempt to create or find its internal topics on the source cluster, 
> but it will fail with the following stack trace:
> {code:java}
> {"log_timestamp": "2023-09-13T09:39:25.612+", "log_level": "ERROR", 
> "process_id": 1, "process_name": "mirror-maker", "thread_id": 1, 
> "thread_name": "Scheduler for MirrorSourceConnector-creating upstream 
> offset-syncs topic", "action_name": 
> "org.apache.kafka.connect.mirror.Scheduler", "log_message": "Scheduler for 
> MirrorSourceConnector caught exception in scheduled task: creating upstream 
> offset-syncs topic"}
> org.apache.kafka.connect.errors.ConnectException: Error while attempting to 
> create/find topic 'mm2-offset-syncs.TARGET.internal'
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createCompactedTopic(MirrorUtils.java:155)
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createSinglePartitionCompactedTopic(MirrorUtils.java:161)
>   at 
> org.apache.kafka.connect.mirror.MirrorSourceConnector.createOffsetSyncsTopic(MirrorSourceConnector.java:328)
>   at org.apache.kafka.connect.mirror.Scheduler.run(Scheduler.java:93)
>   at 
> org.apache.kafka.connect.mirror.Scheduler.executeThread(Scheduler.java:112)
>   at 
> org.apache.kafka.connect.mirror.Scheduler.lambda$execute$2(Scheduler.java:63)
> [...]
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TopicAuthorizationException: Authorization 
> failed.
>   at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
>   at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createCompactedTopic(MirrorUtils.java:124)
>   ... 11 more
> Caused by: org.apache.kafka.common.errors.TopicAuthorizationException: 
> Authorization failed. {code}
>  
> h1. Root cause analysis
> The changes introduced by KAFKA-13401 in 
> [{{{}MirrorUtils{}}}|https://github.com/apache/kafka/pull/12577/files#diff-fa8f595307a4ade20cc22253a7721828e3b55c96f778e9c4842c978801e0a1a4]
>  are supposed to follow the same logic as 
> [{{{}TopicAdmin{}}}|https://github.com/apache/kafka/blob/a7e865c0a756504cc7ae6f4eb0772caddc53/connect/runtime/src/main/java/org/apache/kafka/connect/util/TopicAdmin.java#L423]
>  according to the contributor's 
> [comment|https://github.com/apache/kafka/pull/12577#discussion_r991566108]
> {{TopicAdmin.createOrFindTopics(...)}} and 
> {{MirrorUtils.createCompactedTopic(...)}} aren't aligned in terms of allowed 
> exceptions
> ||Exception||TopicAdmin||MirrorUtils||
> |TopicExistsException|OK|OK|
> |UnsupportedVersionException|OK|_KO_|
> |ClusterAuthorizationException|OK|_KO_|
> |TopicAuthorizationException|OK|_KO_|
>  
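
A sketch of the alignment the table above implies (a simplified, hypothetical helper, not the committed patch): treat the same causes as benign in {{MirrorUtils.createCompactedTopic(...)}} that {{TopicAdmin.createOrFindTopics(...)}} already tolerates.

{code:java}
import java.util.concurrent.ExecutionException;

import org.apache.kafka.common.errors.ClusterAuthorizationException;
import org.apache.kafka.common.errors.TopicAuthorizationException;
import org.apache.kafka.common.errors.TopicExistsException;
import org.apache.kafka.common.errors.UnsupportedVersionException;

// Hypothetical helper, not the committed fix.
final class TopicCreationOutcome {
    // True when topic creation failed for a reason that can be safely ignored,
    // e.g. the topic was pre-created or MM2 lacks permission to create topics.
    static boolean isBenign(ExecutionException e) {
        Throwable cause = e.getCause();
        return cause instanceof TopicExistsException
            || cause instanceof UnsupportedVersionException
            || cause instanceof ClusterAuthorizationException
            || cause instanceof TopicAuthorizationException;
    }
}
{code}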



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15478) Update connect to use ForwardingAdmin

2023-09-19 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim updated KAFKA-15478:
--
Description: 
Connect uses AdminClients to create topics; while this simplifies the 
implementation of Connect, it has the following problems: 
 * It assumes that whoever runs Connect must have admin access to both source 
and destination clusters. This assumption is not necessarily valid all the time.
 * It creates conflict in use cases where centralised systems or tools manage 
Kafka resources. 

It would be easier if customers could choose how they want to manage Kafka 
topics, whether through the admin client or using their centralised system or 
tools. 

 

We already have ForwardingAdmin in MM2, so we can extend Connect to do something 
similar.

 KIP-981 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-981%3A+Manage+Connect+topics+with+custom+implementation+of+Admin]

  was:
Connect uses AdminClients to create topics; while this simplifies the 
implementation of Connect, it has the following problems: 
 * It assumes that whoever runs Connect must have admin access to both source 
and destination clusters. This assumption is not necessarily valid all the time.
 * It creates conflict in use cases where centralised systems or tools manage 
Kafka resources. 

It would be easier if customers could choose how they want to manage Kafka 
topics, whether through the admin client or using their centralised system or 
tools. 

 

We already have ForwardingAdmin in MM2, so we can extend Connect to do something 
similar.


> Update connect to use ForwardingAdmin
> -
>
> Key: KAFKA-15478
> URL: https://issues.apache.org/jira/browse/KAFKA-15478
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Omnia Ibrahim
>Priority: Major
>  Labels: need-kip
>
> Connect uses AdminClients to create topics; while this simplifies the 
> implementation of Connect, it has the following problems: 
>  * It assumes that whoever runs Connect must have admin access to both source 
> and destination clusters. This assumption is not necessarily valid all the 
> time.
>  * It creates conflict in use cases where centralised systems or tools manage 
> Kafka resources. 
> It would be easier if customers could choose how they want to manage Kafka 
> topics, whether through the admin client or using their centralised system or 
> tools. 
>  
> We already have ForwardingAdmin in MM2, so we can extend Connect to do 
> something similar.
>  KIP-981 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-981%3A+Manage+Connect+topics+with+custom+implementation+of+Admin]
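
A sketch of the pattern being proposed for reuse, assuming the MM2-style ForwardingAdmin contract (the interception logic below is a hypothetical placeholder for whatever centralised system owns the cluster's resources):

{code:java}
import java.util.Collection;
import java.util.Map;

import org.apache.kafka.clients.admin.CreateTopicsOptions;
import org.apache.kafka.clients.admin.CreateTopicsResult;
import org.apache.kafka.clients.admin.ForwardingAdmin;
import org.apache.kafka.clients.admin.NewTopic;

// Hypothetical sketch: intercept topic creation and notify a centralised
// system, delegating everything else to the wrapped Admin client.
public class CentralisedAdmin extends ForwardingAdmin {
    public CentralisedAdmin(Map<String, Object> configs) {
        super(configs);
    }

    @Override
    public CreateTopicsResult createTopics(Collection<NewTopic> topics,
                                           CreateTopicsOptions options) {
        for (NewTopic topic : topics) {
            // Placeholder for a call into the centralised management tool.
            System.out.println("Requesting topic via central system: " + topic.name());
        }
        // In this sketch the delegate still performs the creation; a real
        // implementation could route it entirely through the central system.
        return super.createTopics(topics, options);
    }
}
{code}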



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15478) Update connect to use ForwardingAdmin

2023-09-19 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim updated KAFKA-15478:
--
Labels: need-kip  (was: )

> Update connect to use ForwardingAdmin
> -
>
> Key: KAFKA-15478
> URL: https://issues.apache.org/jira/browse/KAFKA-15478
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Omnia Ibrahim
>Priority: Major
>  Labels: need-kip
>
> Connect uses AdminClients to create topics; while this simplifies the 
> implementation of Connect, it has the following problems: 
>  * It assumes that whoever runs Connect must have admin access to both source 
> and destination clusters. This assumption is not necessarily valid all the 
> time.
>  * It creates conflict in use cases where centralised systems or tools manage 
> Kafka resources. 
> It would be easier if customers could choose how they want to manage Kafka 
> topics, whether through the admin client or using their centralised system or 
> tools. 
>  
> We already have ForwardingAdmin in MM2, so we can extend Connect to do 
> something similar.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15478) Update connect to use ForwardingAdmin

2023-09-19 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim updated KAFKA-15478:
--
Description: 
Connect uses AdminClients to create topics; while this simplifies the 
implementation of Connect, it has the following problems: 
 * It assumes that whoever runs Connect must have admin access to both source 
and destination clusters. This assumption is not necessarily valid all the time.
 * It creates conflict in use cases where centralised systems or tools manage 
Kafka resources. 

It would be easier if customers could choose how they want to manage Kafka 
topics, whether through the admin client or using their centralised system or 
tools. 

 

We already have ForwardingAdmin in MM2, so we can extend Connect to do something 
similar.

  was:
Connect uses AdminClients to create topics; while this simplifies the 
implementation of Connect, it has the following problems: 
 * It assumes that whoever runs Connect must have admin access to both source 
and destination clusters. This assumption is not necessarily valid all the time.
 * It creates conflict in use cases where centralised systems or tools manage 
Kafka resources. 

It would be easier if customers could choose how they want to manage Kafka 
topics, whether through the admin client or using their centralised system or 
tools. 

 

We already have


> Update connect to use ForwardingAdmin
> -
>
> Key: KAFKA-15478
> URL: https://issues.apache.org/jira/browse/KAFKA-15478
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Omnia Ibrahim
>Priority: Major
>
> Connect uses AdminClients to create topics; while this simplifies the 
> implementation of Connect, it has the following problems: 
>  * It assumes that whoever runs Connect must have admin access to both source 
> and destination clusters. This assumption is not necessarily valid all the 
> time.
>  * It creates conflict in use cases where centralised systems or tools manage 
> Kafka resources. 
> It would be easier if customers could choose how they want to manage Kafka 
> topics, whether through the admin client or using their centralised system or 
> tools. 
>  
> We already have ForwardingAdmin in MM2, so we can extend Connect to do 
> something similar.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15478) Update connect to use ForwardingAdmin

2023-09-19 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim updated KAFKA-15478:
--
Description: 
Connect uses AdminClients to create topics; while this simplifies the 
implementation of Connect, it has the following problems: 
 * It assumes that whoever runs Connect must have admin access to both source 
and destination clusters. This assumption is not necessarily valid all the time.
 * It creates conflict in use cases where centralised systems or tools manage 
Kafka resources. 

It would be easier if customers could choose how they want to manage Kafka 
topics, whether through the admin client or using their centralised system or 
tools. 

 

We already have

  was:
Connect uses AdminClients to create topics; while this simplifies the 
implementation of Connect, it has the following problems: 
 * It assumes that whoever runs Connect must have admin access to both source 
and destination clusters. This assumption is not necessarily valid all the time.
 * It creates conflict in use cases where centralised systems or tools manage 
Kafka resources. 

It would be easier if customers could choose how they want to manage Kafka 
topics, whether through the admin client or using their centralised system or 
tools. 


> Update connect to use ForwardingAdmin
> -
>
> Key: KAFKA-15478
> URL: https://issues.apache.org/jira/browse/KAFKA-15478
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Omnia Ibrahim
>Priority: Major
>
> Connect uses AdminClients to create topics; while this simplifies the 
> implementation of Connect, it has the following problems: 
>  * It assumes that whoever runs Connect must have admin access to both source 
> and destination clusters. This assumption is not necessarily valid all the 
> time.
>  * It creates conflict in use cases where centralised systems or tools manage 
> Kafka resources. 
> It would be easier if customers could choose how they want to manage Kafka 
> topics, whether through the admin client or using their centralised system or 
> tools. 
>  
> We already have



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15478) Update connect to use ForwardingAdmin

2023-09-19 Thread Omnia Ibrahim (Jira)
Omnia Ibrahim created KAFKA-15478:
-

 Summary: Update connect to use ForwardingAdmin
 Key: KAFKA-15478
 URL: https://issues.apache.org/jira/browse/KAFKA-15478
 Project: Kafka
  Issue Type: New Feature
Reporter: Omnia Ibrahim


Connect uses AdminClients to create topics; while this simplifies the 
implementation of Connect, it has the following problems: 
 * It assumes that whoever runs Connect must have admin access to both source 
and destination clusters. This assumption is not necessarily valid all the time.
 * It creates conflict in use cases where centralised systems or tools manage 
Kafka resources. 

It would be easier if customers could choose how they want to manage Kafka 
topics, whether through the admin client or using their centralised system or 
tools. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15465) MM2 not working when its internal topics are pre-created on a cluster that disallows topic creation

2023-09-14 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim updated KAFKA-15465:
--
Fix Version/s: 3.5.2
   (was: 3.7.0)

> MM2 not working when its internal topics are pre-created on a cluster that 
> disallows topic creation
> ---
>
> Key: KAFKA-15465
> URL: https://issues.apache.org/jira/browse/KAFKA-15465
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.4.1
>Reporter: Ahmed HIBOT
>Assignee: Omnia Ibrahim
>Priority: Major
> Fix For: 3.6.0, 3.5.2
>
>
> h1. Replication steps
>  * Set up a source Kafka cluster (alias SOURCE) which doesn't allow MM2 to 
> create topics (therefore it doesn't allow the creation of its internal topics)
>  * Create the MM2 internal topics in the source Kafka cluster
>  * Set up a target Kafka cluster (alias TARGET)
>  * Enable one-way replication SOURCE->TARGET
> MM2 will attempt to create or find its internal topics on the source cluster, 
> but it will fail with the following stack trace:
> {code:java}
> {"log_timestamp": "2023-09-13T09:39:25.612+", "log_level": "ERROR", 
> "process_id": 1, "process_name": "mirror-maker", "thread_id": 1, 
> "thread_name": "Scheduler for MirrorSourceConnector-creating upstream 
> offset-syncs topic", "action_name": 
> "org.apache.kafka.connect.mirror.Scheduler", "log_message": "Scheduler for 
> MirrorSourceConnector caught exception in scheduled task: creating upstream 
> offset-syncs topic"}
> org.apache.kafka.connect.errors.ConnectException: Error while attempting to 
> create/find topic 'mm2-offset-syncs.TARGET.internal'
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createCompactedTopic(MirrorUtils.java:155)
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createSinglePartitionCompactedTopic(MirrorUtils.java:161)
>   at 
> org.apache.kafka.connect.mirror.MirrorSourceConnector.createOffsetSyncsTopic(MirrorSourceConnector.java:328)
>   at org.apache.kafka.connect.mirror.Scheduler.run(Scheduler.java:93)
>   at 
> org.apache.kafka.connect.mirror.Scheduler.executeThread(Scheduler.java:112)
>   at 
> org.apache.kafka.connect.mirror.Scheduler.lambda$execute$2(Scheduler.java:63)
> [...]
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TopicAuthorizationException: Authorization 
> failed.
>   at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
>   at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createCompactedTopic(MirrorUtils.java:124)
>   ... 11 more
> Caused by: org.apache.kafka.common.errors.TopicAuthorizationException: 
> Authorization failed. {code}
>  
> h1. Root cause analysis
> The changes introduced by KAFKA-13401 in 
> [{{{}MirrorUtils{}}}|https://github.com/apache/kafka/pull/12577/files#diff-fa8f595307a4ade20cc22253a7721828e3b55c96f778e9c4842c978801e0a1a4]
>  are supposed to follow the same logic as 
> [{{{}TopicAdmin{}}}|https://github.com/apache/kafka/blob/a7e865c0a756504cc7ae6f4eb0772caddc53/connect/runtime/src/main/java/org/apache/kafka/connect/util/TopicAdmin.java#L423]
>  according to the contributor's 
> [comment|https://github.com/apache/kafka/pull/12577#discussion_r991566108]
> {{TopicAdmin.createOrFindTopics(...)}} and 
> {{MirrorUtils.createCompactedTopic(...)}} aren't aligned in terms of allowed 
> exceptions
> ||Exception||TopicAdmin||MirrorUtils||
> |TopicExistsException|OK|OK|
> |UnsupportedVersionException|OK|_KO_|
> |ClusterAuthorizationException|OK|_KO_|
> |TopicAuthorizationException|OK|_KO_|
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (KAFKA-15465) MM2 not working when its internal topics are pre-created on a cluster that disallows topic creation

2023-09-14 Thread Omnia Ibrahim (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-15465 ]


Omnia Ibrahim deleted comment on KAFKA-15465:
---

was (Author: omnia_h_ibrahim):
One thing I noticed is that the test for `{{{}MirrorUtils.createCompactedTopic{}}}` 
[https://github.com/apache/kafka/blob/2a41beb0f49f947cfa7dfd99101c8b1ba89842cb/connect/mirror/src/test/java/org/apache/kafka/connect/mirror/MirrorUtilsTest.java#L74],
 which was written a while ago, assumes that we should fail on 
`ClusterAuthorizationException`. If I recall correctly, this is why I didn't 
allow these exceptions and why the test didn't catch the problem you are 
mentioning.

> MM2 not working when its internal topics are pre-created on a cluster that 
> disallows topic creation
> ---
>
> Key: KAFKA-15465
> URL: https://issues.apache.org/jira/browse/KAFKA-15465
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.4.1
>Reporter: Ahmed HIBOT
>Assignee: Omnia Ibrahim
>Priority: Major
>
> h1. Replication steps
>  * Set up a source Kafka cluster (alias SOURCE) which doesn't allow MM2 to 
> create topics (therefore it doesn't allow the creation of its internal topics)
>  * Create the MM2 internal topics in the source Kafka cluster
>  * Set up a target Kafka cluster (alias TARGET)
>  * Enable one-way replication SOURCE->TARGET
> MM2 will attempt to create or find its internal topics on the source cluster, 
> but it will fail with the following stack trace:
> {code:java}
> {"log_timestamp": "2023-09-13T09:39:25.612+", "log_level": "ERROR", 
> "process_id": 1, "process_name": "mirror-maker", "thread_id": 1, 
> "thread_name": "Scheduler for MirrorSourceConnector-creating upstream 
> offset-syncs topic", "action_name": 
> "org.apache.kafka.connect.mirror.Scheduler", "log_message": "Scheduler for 
> MirrorSourceConnector caught exception in scheduled task: creating upstream 
> offset-syncs topic"}
> org.apache.kafka.connect.errors.ConnectException: Error while attempting to 
> create/find topic 'mm2-offset-syncs.TARGET.internal'
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createCompactedTopic(MirrorUtils.java:155)
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createSinglePartitionCompactedTopic(MirrorUtils.java:161)
>   at 
> org.apache.kafka.connect.mirror.MirrorSourceConnector.createOffsetSyncsTopic(MirrorSourceConnector.java:328)
>   at org.apache.kafka.connect.mirror.Scheduler.run(Scheduler.java:93)
>   at 
> org.apache.kafka.connect.mirror.Scheduler.executeThread(Scheduler.java:112)
>   at 
> org.apache.kafka.connect.mirror.Scheduler.lambda$execute$2(Scheduler.java:63)
> [...]
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TopicAuthorizationException: Authorization 
> failed.
>   at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
>   at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createCompactedTopic(MirrorUtils.java:124)
>   ... 11 more
> Caused by: org.apache.kafka.common.errors.TopicAuthorizationException: 
> Authorization failed. {code}
>  
> h1. Root cause analysis
> The changes introduced by KAFKA-13401 in 
> [{{{}MirrorUtils{}}}|https://github.com/apache/kafka/pull/12577/files#diff-fa8f595307a4ade20cc22253a7721828e3b55c96f778e9c4842c978801e0a1a4]
>  are supposed to follow the same logic as 
> [{{{}TopicAdmin{}}}|https://github.com/apache/kafka/blob/a7e865c0a756504cc7ae6f4eb0772caddc53/connect/runtime/src/main/java/org/apache/kafka/connect/util/TopicAdmin.java#L423]
>  according to the contributor's 
> [comment|https://github.com/apache/kafka/pull/12577#discussion_r991566108]
> {{TopicAdmin.createOrFindTopics(...)}} and 
> {{MirrorUtils.createCompactedTopic(...)}} aren't aligned in terms of allowed 
> exceptions
> ||Exception||TopicAdmin||MirrorUtils||
> |TopicExistsException|OK|OK|
> |UnsupportedVersionException|OK|_KO_|
> |ClusterAuthorizationException|OK|_KO_|
> |TopicAuthorizationException|OK|_KO_|
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-15465) MM2 not working when its internal topics are pre-created on a cluster that disallows topic creation

2023-09-14 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17765103#comment-17765103
 ] 

Omnia Ibrahim edited comment on KAFKA-15465 at 9/14/23 10:59 AM:
-

One thing I noticed is that the test for {{MirrorUtils.createCompactedTopic}} 
[https://github.com/apache/kafka/blob/2a41beb0f49f947cfa7dfd99101c8b1ba89842cb/connect/mirror/src/test/java/org/apache/kafka/connect/mirror/MirrorUtilsTest.java#L74],
 which was written a while ago, assumes that we should fail on 
{{ClusterAuthorizationException}}. If I recall correctly, this is why I didn't 
allow these exceptions and why the test didn't catch the problem you are 
mentioning.
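For context, a rough sketch of what the updated expectation could look like in
that test: mock the admin client so topic creation fails with
{{ClusterAuthorizationException}}, then assert that {{createCompactedTopic}}
tolerates it. The signature {{createCompactedTopic(String, short, short,
Admin)}}, the package-private visibility, and the mock wiring are assumptions
for illustration, not the actual test:

{code:java}
package org.apache.kafka.connect.mirror;

import java.util.Collections;
import java.util.Map;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.CreateTopicsResult;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.errors.ClusterAuthorizationException;
import org.apache.kafka.common.internals.KafkaFutureImpl;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class MirrorUtilsAuthorizationTest {

    @Test
    public void createCompactedTopicShouldTolerateClusterAuthorizationErrors() {
        // Complete the create-topics future exceptionally, as a broker that
        // denies topic creation would.
        KafkaFutureImpl<Void> future = new KafkaFutureImpl<>();
        future.completeExceptionally(new ClusterAuthorizationException("denied"));
        Map<String, KafkaFuture<Void>> futures = Collections.singletonMap("topic", future);

        CreateTopicsResult result = mock(CreateTopicsResult.class);
        when(result.values()).thenReturn(futures);

        // Stub both createTopics overloads, since which one MirrorUtils
        // calls is an assumption here.
        Admin admin = mock(Admin.class);
        when(admin.createTopics(any())).thenReturn(result);
        when(admin.createTopics(any(), any())).thenReturn(result);

        // A pre-created topic on a locked-down cluster must not fail MM2.
        assertDoesNotThrow(() ->
                MirrorUtils.createCompactedTopic("topic", (short) 1, (short) 1, admin));
    }
}
{code}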


was (Author: omnia_h_ibrahim):
One thing I noticed is that the test for {{MirrorUtils.createCompactedTopic}} 
[https://github.com/apache/kafka/blob/2a41beb0f49f947cfa7dfd99101c8b1ba89842cb/connect/mirror/src/test/java/org/apache/kafka/connect/mirror/MirrorUtilsTest.java#L74],
 which was written a while ago, assumes that we should fail on 
{{ClusterAuthorizationException}}

> MM2 not working when its internal topics are pre-created on a cluster that 
> disallows topic creation
> ---
>
> Key: KAFKA-15465
> URL: https://issues.apache.org/jira/browse/KAFKA-15465
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.4.1
>Reporter: Ahmed HIBOT
>Assignee: Omnia Ibrahim
>Priority: Major
>
> h1. Replication steps
>  * Setup a source kafka cluster (alias SOURCE) which doesn't allow topic 
> creation to MM2 (therefore it doesn't allow the creation of internal topics)
>  * Create MM2 internal topics in the source kafka cluster
>  * Setup a target kafka cluster (alias TARGET)
>  * Enable one way replication SOURCE->TARGET
> MM2 will attempt to create or find its internal topics on the source cluster 
> but it will fail with the following stack trace
> {code:java}
> {"log_timestamp": "2023-09-13T09:39:25.612+", "log_level": "ERROR", 
> "process_id": 1, "process_name": "mirror-maker", "thread_id": 1, 
> "thread_name": "Scheduler for MirrorSourceConnector-creating upstream 
> offset-syncs topic", "action_name": 
> "org.apache.kafka.connect.mirror.Scheduler", "log_message": "Scheduler for 
> MirrorSourceConnector caught exception in scheduled task: creating upstream 
> offset-syncs topic"}
> org.apache.kafka.connect.errors.ConnectException: Error while attempting to 
> create/find topic 'mm2-offset-syncs.TARGET.internal'
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createCompactedTopic(MirrorUtils.java:155)
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createSinglePartitionCompactedTopic(MirrorUtils.java:161)
>   at 
> org.apache.kafka.connect.mirror.MirrorSourceConnector.createOffsetSyncsTopic(MirrorSourceConnector.java:328)
>   at org.apache.kafka.connect.mirror.Scheduler.run(Scheduler.java:93)
>   at 
> org.apache.kafka.connect.mirror.Scheduler.executeThread(Scheduler.java:112)
>   at 
> org.apache.kafka.connect.mirror.Scheduler.lambda$execute$2(Scheduler.java:63)
> [...]
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TopicAuthorizationException: Authorization 
> failed.
>   at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
>   at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createCompactedTopic(MirrorUtils.java:124)
>   ... 11 more
> Caused by: org.apache.kafka.common.errors.TopicAuthorizationException: 
> Authorization failed. {code}
>  
> h1. Root cause analysis
> The changes introduced by KAFKA-13401 in 
> [{{{}MirrorUtils{}}}|https://github.com/apache/kafka/pull/12577/files#diff-fa8f595307a4ade20cc22253a7721828e3b55c96f778e9c4842c978801e0a1a4]
>  are supposed to follow the same logic as 
> [{{{}TopicAdmin{}}}|https://github.com/apache/kafka/blob/a7e865c0a756504cc7ae6f4eb0772caddc53/connect/runtime/src/main/java/org/apache/kafka/connect/util/TopicAdmin.java#L423]
>  according to the contributor's 
> [comment|https://github.com/apache/kafka/pull/12577#discussion_r991566108]
> {{TopicAdmin.createOrFindTopics(...)}} and 
> {{MirrorUtils.createCompactedTopic(...)}} aren't aligned in terms of allowed 
> exceptions
> ||Exception||TopicAdmin||MirrorUtils||
> |TopicExistsException|OK|OK|
> |UnsupportedVersionException|OK|_KO_|
> |ClusterAuthorizationException|OK|_KO_|
> |TopicAuthorizationException|OK|_KO_|
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15465) MM2 not working when its internal topics are pre-created on a cluster that disallows topic creation

2023-09-14 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17765103#comment-17765103
 ] 

Omnia Ibrahim commented on KAFKA-15465:
---

One thing I noticed is that the test for {{MirrorUtils.createCompactedTopic}} 
[https://github.com/apache/kafka/blob/2a41beb0f49f947cfa7dfd99101c8b1ba89842cb/connect/mirror/src/test/java/org/apache/kafka/connect/mirror/MirrorUtilsTest.java#L74],
 which was written a while ago, assumes that we should fail on 
{{ClusterAuthorizationException}}

> MM2 not working when its internal topics are pre-created on a cluster that 
> disallows topic creation
> ---
>
> Key: KAFKA-15465
> URL: https://issues.apache.org/jira/browse/KAFKA-15465
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.4.1
>Reporter: Ahmed HIBOT
>Assignee: Omnia Ibrahim
>Priority: Major
>
> h1. Replication steps
>  * Setup a source kafka cluster (alias SOURCE) which doesn't allow topic 
> creation to MM2 (therefore it doesn't allow the creation of internal topics)
>  * Create MM2 internal topics in the source kafka cluster
>  * Setup a target kafka cluster (alias TARGET)
>  * Enable one way replication SOURCE->TARGET
> MM2 will attempt to create or find its internal topics on the source cluster 
> but it will fail with the following stack trace
> {code:java}
> {"log_timestamp": "2023-09-13T09:39:25.612+", "log_level": "ERROR", 
> "process_id": 1, "process_name": "mirror-maker", "thread_id": 1, 
> "thread_name": "Scheduler for MirrorSourceConnector-creating upstream 
> offset-syncs topic", "action_name": 
> "org.apache.kafka.connect.mirror.Scheduler", "log_message": "Scheduler for 
> MirrorSourceConnector caught exception in scheduled task: creating upstream 
> offset-syncs topic"}
> org.apache.kafka.connect.errors.ConnectException: Error while attempting to 
> create/find topic 'mm2-offset-syncs.TARGET.internal'
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createCompactedTopic(MirrorUtils.java:155)
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createSinglePartitionCompactedTopic(MirrorUtils.java:161)
>   at 
> org.apache.kafka.connect.mirror.MirrorSourceConnector.createOffsetSyncsTopic(MirrorSourceConnector.java:328)
>   at org.apache.kafka.connect.mirror.Scheduler.run(Scheduler.java:93)
>   at 
> org.apache.kafka.connect.mirror.Scheduler.executeThread(Scheduler.java:112)
>   at 
> org.apache.kafka.connect.mirror.Scheduler.lambda$execute$2(Scheduler.java:63)
> [...]
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TopicAuthorizationException: Authorization 
> failed.
>   at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
>   at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createCompactedTopic(MirrorUtils.java:124)
>   ... 11 more
> Caused by: org.apache.kafka.common.errors.TopicAuthorizationException: 
> Authorization failed. {code}
>  
> h1. Root cause analysis
> The changes introduced by KAFKA-13401 in 
> [{{{}MirrorUtils{}}}|https://github.com/apache/kafka/pull/12577/files#diff-fa8f595307a4ade20cc22253a7721828e3b55c96f778e9c4842c978801e0a1a4]
>  are supposed to follow the same logic as 
> [{{{}TopicAdmin{}}}|https://github.com/apache/kafka/blob/a7e865c0a756504cc7ae6f4eb0772caddc53/connect/runtime/src/main/java/org/apache/kafka/connect/util/TopicAdmin.java#L423]
>  according to the contributor's 
> [comment|https://github.com/apache/kafka/pull/12577#discussion_r991566108]
> {{TopicAdmin.createOrFindTopics(...)}} and 
> {{MirrorUtils.createCompactedTopic(...)}} aren't aligned in terms of allowed 
> exceptions
> ||Exception||TopicAdmin||MirrorUtils||
> |TopicExistsException|OK|OK|
> |UnsupportedVersionException|OK|_KO_|
> |ClusterAuthorizationException|OK|_KO_|
> |TopicAuthorizationException|OK|_KO_|
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15465) MM2 not working when its internal topics are pre-created on a cluster that disallows topic creation

2023-09-14 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-15465:
-

Assignee: Omnia Ibrahim

> MM2 not working when its internal topics are pre-created on a cluster that 
> disallows topic creation
> ---
>
> Key: KAFKA-15465
> URL: https://issues.apache.org/jira/browse/KAFKA-15465
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.4.1
>Reporter: Ahmed HIBOT
>Assignee: Omnia Ibrahim
>Priority: Major
>
> h1. Replication steps
>  * Setup a source kafka cluster (alias SOURCE) which doesn't allow topic 
> creation to MM2 (therefore it doesn't allow the creation of internal topics)
>  * Create MM2 internal topics in the source kafka cluster
>  * Setup a target kafka cluster (alias TARGET)
>  * Enable one way replication SOURCE->TARGET
> MM2 will attempt to create or find its internal topics on the source cluster 
> but it will fail with the following stack trace
> {code:java}
> {"log_timestamp": "2023-09-13T09:39:25.612+", "log_level": "ERROR", 
> "process_id": 1, "process_name": "mirror-maker", "thread_id": 1, 
> "thread_name": "Scheduler for MirrorSourceConnector-creating upstream 
> offset-syncs topic", "action_name": 
> "org.apache.kafka.connect.mirror.Scheduler", "log_message": "Scheduler for 
> MirrorSourceConnector caught exception in scheduled task: creating upstream 
> offset-syncs topic"}
> org.apache.kafka.connect.errors.ConnectException: Error while attempting to 
> create/find topic 'mm2-offset-syncs.TARGET.internal'
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createCompactedTopic(MirrorUtils.java:155)
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createSinglePartitionCompactedTopic(MirrorUtils.java:161)
>   at 
> org.apache.kafka.connect.mirror.MirrorSourceConnector.createOffsetSyncsTopic(MirrorSourceConnector.java:328)
>   at org.apache.kafka.connect.mirror.Scheduler.run(Scheduler.java:93)
>   at 
> org.apache.kafka.connect.mirror.Scheduler.executeThread(Scheduler.java:112)
>   at 
> org.apache.kafka.connect.mirror.Scheduler.lambda$execute$2(Scheduler.java:63)
> [...]
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TopicAuthorizationException: Authorization 
> failed.
>   at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
>   at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createCompactedTopic(MirrorUtils.java:124)
>   ... 11 more
> Caused by: org.apache.kafka.common.errors.TopicAuthorizationException: 
> Authorization failed. {code}
>  
> h1. Root cause analysis
> The changes introduced by KAFKA-13401 in 
> [{{{}MirrorUtils{}}}|https://github.com/apache/kafka/pull/12577/files#diff-fa8f595307a4ade20cc22253a7721828e3b55c96f778e9c4842c978801e0a1a4]
>  are supposed to follow the same logic as 
> [{{{}TopicAdmin{}}}|https://github.com/apache/kafka/blob/a7e865c0a756504cc7ae6f4eb0772caddc53/connect/runtime/src/main/java/org/apache/kafka/connect/util/TopicAdmin.java#L423]
>  according to the contributor's 
> [comment|https://github.com/apache/kafka/pull/12577#discussion_r991566108]
> {{TopicAdmin.createOrFindTopics(...)}} and 
> {{MirrorUtils.createCompactedTopic(...)}} aren't aligned in terms of allowed 
> exceptions
> ||Exception||TopicAdmin||MirrorUtils||
> |TopicExistsException|OK|OK|
> |UnsupportedVersionException|OK|_KO_|
> |ClusterAuthorizationException|OK|_KO_|
> |TopicAuthorizationException|OK|_KO_|
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15465) MM2 not working when its internal topics are pre-created on a cluster that disallows topic creation

2023-09-14 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17765099#comment-17765099
 ] 

Omnia Ibrahim commented on KAFKA-15465:
---

Hi [~ahibot],

I re-checked the code of TopicAdmin.createOrFindTopics and we missed a few 
returns. I'm raising a PR for this shortly. Thanks for reporting this.

> MM2 not working when its internal topics are pre-created on a cluster that 
> disallows topic creation
> ---
>
> Key: KAFKA-15465
> URL: https://issues.apache.org/jira/browse/KAFKA-15465
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.4.1
>Reporter: Ahmed HIBOT
>Priority: Major
>
> h1. Replication steps
>  * Setup a source kafka cluster (alias SOURCE) which doesn't allow topic 
> creation to MM2 (therefore it doesn't allow the creation of internal topics)
>  * Create MM2 internal topics in the source kafka cluster
>  * Setup a target kafka cluster (alias TARGET)
>  * Enable one way replication SOURCE->TARGET
> MM2 will attempt to create or find its internal topics on the source cluster 
> but it will fail with the following stack trace
> {code:java}
> {"log_timestamp": "2023-09-13T09:39:25.612+", "log_level": "ERROR", 
> "process_id": 1, "process_name": "mirror-maker", "thread_id": 1, 
> "thread_name": "Scheduler for MirrorSourceConnector-creating upstream 
> offset-syncs topic", "action_name": 
> "org.apache.kafka.connect.mirror.Scheduler", "log_message": "Scheduler for 
> MirrorSourceConnector caught exception in scheduled task: creating upstream 
> offset-syncs topic"}
> org.apache.kafka.connect.errors.ConnectException: Error while attempting to 
> create/find topic 'mm2-offset-syncs.TARGET.internal'
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createCompactedTopic(MirrorUtils.java:155)
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createSinglePartitionCompactedTopic(MirrorUtils.java:161)
>   at 
> org.apache.kafka.connect.mirror.MirrorSourceConnector.createOffsetSyncsTopic(MirrorSourceConnector.java:328)
>   at org.apache.kafka.connect.mirror.Scheduler.run(Scheduler.java:93)
>   at 
> org.apache.kafka.connect.mirror.Scheduler.executeThread(Scheduler.java:112)
>   at 
> org.apache.kafka.connect.mirror.Scheduler.lambda$execute$2(Scheduler.java:63)
> [...]
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TopicAuthorizationException: Authorization 
> failed.
>   at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
>   at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
>   at 
> org.apache.kafka.connect.mirror.MirrorUtils.createCompactedTopic(MirrorUtils.java:124)
>   ... 11 more
> Caused by: org.apache.kafka.common.errors.TopicAuthorizationException: 
> Authorization failed. {code}
>  
> h1. Root cause analysis
> The changes introduced by KAFKA-13401 in 
> [{{{}MirrorUtils{}}}|https://github.com/apache/kafka/pull/12577/files#diff-fa8f595307a4ade20cc22253a7721828e3b55c96f778e9c4842c978801e0a1a4]
>  are supposed to follow the same logic as 
> [{{{}TopicAdmin{}}}|https://github.com/apache/kafka/blob/a7e865c0a756504cc7ae6f4eb0772caddc53/connect/runtime/src/main/java/org/apache/kafka/connect/util/TopicAdmin.java#L423]
>  according to the contributor's 
> [comment|https://github.com/apache/kafka/pull/12577#discussion_r991566108]
> {{TopicAdmin.createOrFindTopics(...)}} and 
> {{MirrorUtils.createCompactedTopic(...)}} aren't aligned in terms of allowed 
> exceptions
> ||Exception||TopicAdmin||MirrorUtils||
> |TopicExistsException|OK|OK|
> |UnsupportedVersionException|OK|_KO_|
> |ClusterAuthorizationException|OK|_KO_|
> |TopicAuthorizationException|OK|_KO_|
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14589) Move ConsumerGroupCommand to tools

2023-08-17 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17755426#comment-17755426
 ] 

Omnia Ibrahim commented on KAFKA-14589:
---

[~nizhikov] sure

> Move ConsumerGroupCommand to tools
> --
>
> Key: KAFKA-14589
> URL: https://issues.apache.org/jira/browse/KAFKA-14589
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Omnia Ibrahim
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-14588) Move ConfigCommand to tools

2023-08-17 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-14588:
-

Assignee: Nikolay Izhikov  (was: Omnia Ibrahim)

> Move ConfigCommand to tools
> ---
>
> Key: KAFKA-14588
> URL: https://issues.apache.org/jira/browse/KAFKA-14588
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Nikolay Izhikov
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14588) Move ConfigCommand to tools

2023-08-17 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17755425#comment-17755425
 ] 

Omnia Ibrahim commented on KAFKA-14588:
---

[~nizhikov] sure, go ahead, as I am focusing on moving kafka.security to move 
AclCommand

> Move ConfigCommand to tools
> ---
>
> Key: KAFKA-14588
> URL: https://issues.apache.org/jira/browse/KAFKA-14588
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Omnia Ibrahim
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-15102) Mirror Maker 2 - KIP690 backward compatibility

2023-08-02 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17750371#comment-17750371
 ] 

Omnia Ibrahim edited comment on KAFKA-15102 at 8/2/23 3:06 PM:
---

The KIP got accepted and the PR is ready


was (Author: omnia_h_ibrahim):
The KIP is accepted now and the PR is ready

> Mirror Maker 2 - KIP690 backward compatibility
> --
>
> Key: KAFKA-15102
> URL: https://issues.apache.org/jira/browse/KAFKA-15102
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.1.0
>Reporter: David Dufour
>Assignee: Omnia Ibrahim
>Priority: Major
> Fix For: 3.3, 3.2.4, 3.1.3, 3.6.0, 3.4.2, 3.5.2
>
>
> According to KIP690, "When users upgrade an existing MM2 cluster they don’t 
> need to change any of their current configuration as this proposal maintains 
> the default behaviour for MM2."
> Now, the separator is subject to customization.
> As a consequence, when an MM2 upgrade is performed, if the separator was 
> customized with replication.policy.separator, the name of this internal topic 
> changes. It then generates issues like:
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.InvalidTopicException: Topic 
> 'mm2-offset-syncs_bkts28_internal' collides with existing topics: 
> mm2-offset-syncs.bkts28.internal
> It has been observed that the replication can then be broken sometimes 
> several days after the upgrade (reason not identified). By deleting the old 
> topic name, it recovers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-15102) Mirror Maker 2 - KIP690 backward compatibility

2023-08-02 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17750371#comment-17750371
 ] 

Omnia Ibrahim edited comment on KAFKA-15102 at 8/2/23 3:06 PM:
---

The KIP is accepted now and the PR is ready


was (Author: omnia_h_ibrahim):
The KIP is accepted and the PR is ready

> Mirror Maker 2 - KIP690 backward compatibility
> --
>
> Key: KAFKA-15102
> URL: https://issues.apache.org/jira/browse/KAFKA-15102
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.1.0
>Reporter: David Dufour
>Assignee: Omnia Ibrahim
>Priority: Major
> Fix For: 3.3, 3.2.4, 3.1.3, 3.6.0, 3.4.2, 3.5.2
>
>
> According to KIP690, "When users upgrade an existing MM2 cluster they don’t 
> need to change any of their current configuration as this proposal maintains 
> the default behaviour for MM2."
> Now, the separator is subject to customization.
> As a consequence, when an MM2 upgrade is performed, if the separator was 
> customized with replication.policy.separator, the name of this internal topic 
> changes. It then generates issues like:
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.InvalidTopicException: Topic 
> 'mm2-offset-syncs_bkts28_internal' collides with existing topics: 
> mm2-offset-syncs.bkts28.internal
> It has been observed that the replication can then be broken sometimes 
> several days after the upgrade (reason not identified). By deleting the old 
> topic name, it recovers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15102) Mirror Maker 2 - KIP690 backward compatibility

2023-08-02 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17750371#comment-17750371
 ] 

Omnia Ibrahim commented on KAFKA-15102:
---

The KIP is accepted and the PR is ready

> Mirror Maker 2 - KIP690 backward compatibility
> --
>
> Key: KAFKA-15102
> URL: https://issues.apache.org/jira/browse/KAFKA-15102
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.1.0
>Reporter: David Dufour
>Assignee: Omnia Ibrahim
>Priority: Major
> Fix For: 3.3, 3.2.4, 3.1.3, 3.6.0, 3.4.2, 3.5.2
>
>
> According to KIP690, "When users upgrade an existing MM2 cluster they don’t 
> need to change any of their current configuration as this proposal maintains 
> the default behaviour for MM2."
> Now, the separator is subject to customization.
> As a consequence, when an MM2 upgrade is performed, if the separator was 
> customized with replication.policy.separator, the name of this internal topic 
> changes. It then generates issues like:
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.InvalidTopicException: Topic 
> 'mm2-offset-syncs_bkts28_internal' collides with existing topics: 
> mm2-offset-syncs.bkts28.internal
> It has been observed that the replication can then be broken sometimes 
> several days after the upgrade (reason not identified). By deleting the old 
> topic name, it recovers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15102) Mirror Maker 2 - KIP690 backward compatibility

2023-07-09 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17741402#comment-17741402
 ] 

Omnia Ibrahim commented on KAFKA-15102:
---

I opened this small KIP to fix this 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-949%3A+Add+flag+to+enable+the+usage+of+topic+separator+in+MM2+DefaultReplicationPolicy]
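For anyone hitting this, the flag proposed there would let an upgraded MM2 keep
the pre-KIP-690 internal topic names even with a custom separator. A sketch of
an mm2.properties fragment (flag name taken from the KIP-949 title above; the
exact default and semantics are defined in the KIP, not here):

{code}
# Replicated topics keep using the custom separator, e.g. SOURCE_mytopic:
replication.policy.separator = _
# But internal topics keep the legacy "." names, e.g.
# mm2-offset-syncs.bkts28.internal, avoiding the collision described above:
replication.policy.internal.topic.separator.enabled = false
{code}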

> Mirror Maker 2 - KIP690 backward compatibility
> --
>
> Key: KAFKA-15102
> URL: https://issues.apache.org/jira/browse/KAFKA-15102
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.1.0
>Reporter: David Dufour
>Assignee: Omnia Ibrahim
>Priority: Major
>
> According to KIP690, "When users upgrade an existing MM2 cluster they don’t 
> need to change any of their current configuration as this proposal maintains 
> the default behaviour for MM2."
> Now, the separator is subject to customization.
> As a consequence, when an MM2 upgrade is performed, if the separator was 
> customized with replication.policy.separator, the name of this internal topic 
> changes. It then generates issues like:
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.InvalidTopicException: Topic 
> 'mm2-offset-syncs_bkts28_internal' collides with existing topics: 
> mm2-offset-syncs.bkts28.internal
> It has been observed that the replication can then be broken sometimes 
> several days after the upgrade (reason not identified). By deleting the old 
> topic name, it recovers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-14527) Move `kafka.security` from `core` to separate module

2023-07-09 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-14527:
-

Assignee: Omnia Ibrahim

> Move `kafka.security` from `core` to separate module
> 
>
> Key: KAFKA-14527
> URL: https://issues.apache.org/jira/browse/KAFKA-14527
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Omnia Ibrahim
>Priority: Major
>
> A possible module name would be `server-security`. We should consider moving 
> `StandardAuthorizer` and `org.apache.kafka.server.authorizer.Authorizer` to 
> this module too.
> See KAFKA-14524 for more context.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15102) Mirror Maker 2 - KIP690 backward compatibility

2023-07-05 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740252#comment-17740252
 ] 

Omnia Ibrahim commented on KAFKA-15102:
---

[~ChrisEgerton] I'll prepare the KIP this week.

> Mirror Maker 2 - KIP690 backward compatibility
> --
>
> Key: KAFKA-15102
> URL: https://issues.apache.org/jira/browse/KAFKA-15102
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.1.0
>Reporter: David Dufour
>Priority: Major
>
> According to KIP690, "When users upgrade an existing MM2 cluster they don’t 
> need to change any of their current configuration as this proposal maintains 
> the default behaviour for MM2."
> Now, the separator is subject to customization.
> As a consequence, when an MM2 upgrade is performed, if the separator was 
> customized with replication.policy.separator, the name of this internal topic 
> changes. It then generates issues like:
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.InvalidTopicException: Topic 
> 'mm2-offset-syncs_bkts28_internal' collides with existing topics: 
> mm2-offset-syncs.bkts28.internal
> It has been observed that the replication can then be broken sometimes 
> several days after the upgrade (reason not identified). By deleting the old 
> topic name, it recovers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-15102) Mirror Maker 2 - KIP690 backward compatibility

2023-07-04 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17739924#comment-17739924
 ] 

Omnia Ibrahim edited comment on KAFKA-15102 at 7/4/23 1:29 PM:
---

[~ChrisEgerton] I updated the compatibility section in the KIP with the 
impacted versions and linked to this JIRA. I can open a small KIP to have 
{{replication.policy.internal.topic.separator.enabled}} if you don't have 
time to do it.


was (Author: omnia_h_ibrahim):
[~ChrisEgerton] I updated the compatibility section in the KIP with the 
impacted versions and linked to this JIRA. I can open a small KIP to have 
{{replication.policy.internal.topic.separator.enabled}} if you don't have 
time to do it.

> Mirror Maker 2 - KIP690 backward compatibility
> --
>
> Key: KAFKA-15102
> URL: https://issues.apache.org/jira/browse/KAFKA-15102
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.1.0
>Reporter: David Dufour
>Priority: Major
>
> According to KIP690, "When users upgrade an existing MM2 cluster they don’t 
> need to change any of their current configuration as this proposal maintains 
> the default behaviour for MM2."
> Now, the separator is subject to customization.
> As a consequence, when an MM2 upgrade is performed, if the separator was 
> customized with replication.policy.separator, the name of this internal topic 
> changes. It then generates issues like:
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.InvalidTopicException: Topic 
> 'mm2-offset-syncs_bkts28_internal' collides with existing topics: 
> mm2-offset-syncs.bkts28.internal
> It has been observed that the replication can then be broken sometimes 
> several days after the upgrade (reason not identified). By deleting the old 
> topic name, it recovers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15102) Mirror Maker 2 - KIP690 backward compatibility

2023-07-04 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17739924#comment-17739924
 ] 

Omnia Ibrahim commented on KAFKA-15102:
---

[~ChrisEgerton] I updated the compatibility section in the KIP with the 
impacted versions and linked to this JIRA. I can open a small KIP to have 
{{replication.policy.internal.topic.separator.enabled}} if you don't have 
time to do it.

> Mirror Maker 2 - KIP690 backward compatibility
> --
>
> Key: KAFKA-15102
> URL: https://issues.apache.org/jira/browse/KAFKA-15102
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.1.0
>Reporter: David Dufour
>Priority: Major
>
> According to KIP690, "When users upgrade an existing MM2 cluster they don’t 
> need to change any of their current configuration as this proposal maintains 
> the default behaviour for MM2."
> Now, the separator is subject to customization.
> As a consequence, when an MM2 upgrade is performed, if the separator was 
> customized with replication.policy.separator, the name of this internal topic 
> changes. It then generates issues like:
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.InvalidTopicException: Topic 
> 'mm2-offset-syncs_bkts28_internal' collides with existing topics: 
> mm2-offset-syncs.bkts28.internal
> It has been observed that the replication can then be broken sometimes 
> several days after the upgrade (reason not identified). By deleting the old 
> topic name, it recovers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-15102) Mirror Maker 2 - KIP690 backward compatibility

2023-06-29 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17738744#comment-17738744
 ] 

Omnia Ibrahim edited comment on KAFKA-15102 at 6/29/23 7:30 PM:


Thanks, [~ddufour1a] for raising this. The backward compatibility mentioned in 
the KIP accounted only for using the default separator configuration and 
didn't address the use of a custom separator (my mistake here). 
[~ChrisEgerton] I think having 
{{replication.policy.internal.topic.separator.enabled}} is a good option.

We can also update the backward compatibility section in KIP-690 to ask users 
to provide a custom implementation that overrides the 
{{ReplicationPolicy.offsetSyncsTopic}} and 
{{ReplicationPolicy.checkpointsTopic}} methods to point at the old topics if 
they still want to use the old internal topics, or to delete them, so they 
are aware of this issue.
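As a rough sketch of that workaround (method names as introduced by KIP-690;
verify against the Kafka version in use before relying on this):

{code:java}
import org.apache.kafka.connect.mirror.DefaultReplicationPolicy;

/**
 * Sketch: keep the pre-upgrade "." separator for MM2's internal topics while
 * still using a custom replication.policy.separator for replicated topics.
 * Configured via replication.policy.class in the MM2 properties.
 */
public class LegacyInternalTopicsPolicy extends DefaultReplicationPolicy {

    @Override
    public String offsetSyncsTopic(String clusterAlias) {
        // Hard-code the legacy name, e.g. mm2-offset-syncs.TARGET.internal
        return "mm2-offset-syncs." + clusterAlias + ".internal";
    }

    @Override
    public String checkpointsTopic(String clusterAlias) {
        // Hard-code the legacy name, e.g. SOURCE.checkpoints.internal
        return clusterAlias + ".checkpoints.internal";
    }
}
{code}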


was (Author: omnia_h_ibrahim):
Thanks, [~ddufour1a] for raising this. The backward compatibility mentioned in 
the KIP accounted only for using the default separator configuration and 
didn't address the use of a custom separator (my mistake here). 
[~ChrisEgerton] I think having 
{{replication.policy.internal.topic.separator.enabled}} is a good option.

We can also update the backward compatibility section in KIP-690 to ask users 
to override the {{ReplicationPolicy.offsetSyncsTopic}} and 
{{ReplicationPolicy.checkpointsTopic}} methods to point at the old topics if 
they still want to use the old internal topics, or to delete them, so they 
are aware of this issue.

> Mirror Maker 2 - KIP690 backward compatibility
> --
>
> Key: KAFKA-15102
> URL: https://issues.apache.org/jira/browse/KAFKA-15102
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.1.0
>Reporter: David Dufour
>Priority: Major
>
> According to KIP690, "When users upgrade an existing MM2 cluster they don’t 
> need to change any of their current configuration as this proposal maintains 
> the default behaviour for MM2."
> Now, the separator is subject to customization.
> As a consequence, when an MM2 upgrade is performed, if the separator was 
> customized with replication.policy.separator, the name of this internal topic 
> changes. It then generates issues like:
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.InvalidTopicException: Topic 
> 'mm2-offset-syncs_bkts28_internal' collides with existing topics: 
> mm2-offset-syncs.bkts28.internal
> It has been observed that the replication can then be broken sometimes 
> several days after the upgrade (reason not identified). By deleting the old 
> topic name, it recovers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-15102) Mirror Maker 2 - KIP690 backward compatibility

2023-06-29 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17738744#comment-17738744
 ] 

Omnia Ibrahim edited comment on KAFKA-15102 at 6/29/23 7:01 PM:


Thanks, [~ddufour1a] for raising this. The backward compatibility mentioned in 
the KIP accounted only for using the default separator configuration and 
didn't address the use of a custom separator (my mistake here). 
[~ChrisEgerton] I think having 
{{replication.policy.internal.topic.separator.enabled}} is a good option.

We can also update the backward compatibility section in KIP-690 to ask users 
to override the {{ReplicationPolicy.offsetSyncsTopic}} and 
{{ReplicationPolicy.checkpointsTopic}} methods to point at the old topics if 
they still want to use the old internal topics, or to delete them, so they 
are aware of this issue.


was (Author: omnia_h_ibrahim):
Thanks, [~ddufour1a] for raising this. The backward compatibility mentioned in 
the KIP accounted only for using the default separator configuration and 
didn't address the use of a custom separator (my mistake here). 
[~ChrisEgerton] I think having 
{{replication.policy.internal.topic.separator.enabled}} is a good option.

We can also update the backward compatibility section in KIP-690 to ask 
customers to override the {{ReplicationPolicy.offsetSyncsTopic}} and 
{{ReplicationPolicy.checkpointsTopic}} methods to point at the old topics if 
they still want to use the old internal topics, or to delete them, so they 
are aware of this issue.

> Mirror Maker 2 - KIP690 backward compatibility
> --
>
> Key: KAFKA-15102
> URL: https://issues.apache.org/jira/browse/KAFKA-15102
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.1.0
>Reporter: David Dufour
>Priority: Major
>
> According to KIP690, "When users upgrade an existing MM2 cluster they don’t 
> need to change any of their current configuration as this proposal maintains 
> the default behaviour for MM2."
> Now, the separator is subject to customization.
> As a consequence, when an MM2 upgrade is performed, if the separator was 
> customized with replication.policy.separator, the name of this internal topic 
> changes. It then generates issues like:
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.InvalidTopicException: Topic 
> 'mm2-offset-syncs_bkts28_internal' collides with existing topics: 
> mm2-offset-syncs.bkts28.internal
> It has been observed that the replication can then be broken sometimes 
> several days after the upgrade (reason not identified). By deleting the old 
> topic name, it recovers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-15102) Mirror Maker 2 - KIP690 backward compatibility

2023-06-29 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17738744#comment-17738744
 ] 

Omnia Ibrahim edited comment on KAFKA-15102 at 6/29/23 7:00 PM:


Thanks, [~ddufour1a] for raising this. The backward compatibility mentioned in 
the KIP accounted only for using the default separator configuration and 
didn't address the use of a custom separator (my mistake here). 
[~ChrisEgerton] I think having 
{{replication.policy.internal.topic.separator.enabled}} is a good option.

We can also update the backward compatibility section in KIP-690 to ask 
customers to override the {{ReplicationPolicy.offsetSyncsTopic}} and 
{{ReplicationPolicy.checkpointsTopic}} methods to point at the old topics if 
they still want to use the old internal topics, or to delete them, so they 
are aware of this issue.


was (Author: omnia_h_ibrahim):
Thanks, [~ddufour1a] for raising this. The backward compatibility mentioned in 
the KIP accounted only for using the default separator configuration and 
didn't address the use of a custom separator (my mistake here). 
[~ChrisEgerton] I think having 
{{replication.policy.internal.topic.separator.enabled}} is a good option. We 
can also update the backward compatibility section in KIP-690 to ask 
customers to override the {{ReplicationPolicy.offsetSyncsTopic}} and 
{{ReplicationPolicy.checkpointsTopic}} methods to point at the old topics if 
they still want to use the old internal topics, or to delete them, so they 
are aware of this issue.

> Mirror Maker 2 - KIP690 backward compatibility
> --
>
> Key: KAFKA-15102
> URL: https://issues.apache.org/jira/browse/KAFKA-15102
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.1.0
>Reporter: David Dufour
>Priority: Major
>
> According to KIP690, "When users upgrade an existing MM2 cluster they don’t 
> need to change any of their current configuration as this proposal maintains 
> the default behaviour for MM2."
> Now, the separator is subject to customization.
> As a consequence, when an MM2 upgrade is performed, if the separator was 
> customized with replication.policy.separator, the name of this internal topic 
> changes. It then generates issues like:
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.InvalidTopicException: Topic 
> 'mm2-offset-syncs_bkts28_internal' collides with existing topics: 
> mm2-offset-syncs.bkts28.internal
> It has been observed that the replication can then be broken sometimes 
> several days after the upgrade (reason not identified). By deleting the old 
> topic name, it recovers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-15102) Mirror Maker 2 - KIP690 backward compatibility

2023-06-29 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17738744#comment-17738744
 ] 

Omnia Ibrahim edited comment on KAFKA-15102 at 6/29/23 7:00 PM:


Thanks, [~ddufour1a] for raising this. The backward compatibility mentioned in 
the KIP accounted only for using the default separator configuration and 
didn't address the use of a custom separator (my mistake here). 
[~ChrisEgerton] I think having 
{{replication.policy.internal.topic.separator.enabled}} is a good option.

We can also update the backward compatibility section in KIP-690 to ask 
customers to override the {{ReplicationPolicy.offsetSyncsTopic}} and 
{{ReplicationPolicy.checkpointsTopic}} methods to point at the old topics if 
they still want to use the old internal topics, or to delete them, so they 
are aware of this issue.


was (Author: omnia_h_ibrahim):
Thanks, [~ddufour1a] for raising this. The backward compatibility mentioned in 
the KIP accounted only for using the default separator configuration and 
didn't address the use of a custom separator (my mistake here). 
[~ChrisEgerton] I think having 
{{replication.policy.internal.topic.separator.enabled}} is a good option.

We can also update the backward compatibility section in KIP-690 to ask 
customers to override the {{ReplicationPolicy.offsetSyncsTopic}} and 
{{ReplicationPolicy.checkpointsTopic}} methods to point at the old topics if 
they still want to use the old internal topics, or to delete them, so they 
are aware of this issue.

> Mirror Maker 2 - KIP690 backward compatibility
> --
>
> Key: KAFKA-15102
> URL: https://issues.apache.org/jira/browse/KAFKA-15102
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.1.0
>Reporter: David Dufour
>Priority: Major
>
> According to KIP690, "When users upgrade an existing MM2 cluster they don’t 
> need to change any of their current configuration as this proposal maintains 
> the default behaviour for MM2."
> Now, the separator is subject to customization.
> As a consequence, when an MM2 upgrade is performed, if the separator was 
> customized with replication.policy.separator, the name of this internal topic 
> changes. It then generates issues like:
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.InvalidTopicException: Topic 
> 'mm2-offset-syncs_bkts28_internal' collides with existing topics: 
> mm2-offset-syncs.bkts28.internal
> It has been observed that the replication can then be broken sometimes 
> several days after the upgrade (reason not identified). By deleting the old 
> topic name, it recovers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-15102) Mirror Maker 2 - KIP690 backward compatibility

2023-06-29 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17738744#comment-17738744
 ] 

Omnia Ibrahim edited comment on KAFKA-15102 at 6/29/23 7:00 PM:


Thanks, [~ddufour1a] for raising this. The backward compatibility mentioned in 
the KIP accounted only for using the default separator configuration and 
didn't address the use of a custom separator (my mistake here). 
[~ChrisEgerton] I think having 
{{replication.policy.internal.topic.separator.enabled}} is a good option.

We can also update the backward compatibility section in KIP-690 to ask 
customers to override the {{ReplicationPolicy.offsetSyncsTopic}} and 
{{ReplicationPolicy.checkpointsTopic}} methods to point at the old topics if 
they still want to use the old internal topics, or to delete them, so they 
are aware of this issue.


was (Author: omnia_h_ibrahim):
Thanks, [~ddufour1a] for raising this. The backward compatibility mentioned in 
the KIP accounted only for using the default separator configuration and 
didn't address the use of a custom separator (my mistake here). 
[~ChrisEgerton] I think having 
{{replication.policy.internal.topic.separator.enabled}} is a good option.

We can also update the backward compatibility section in KIP-690 to ask 
customers to override the {{ReplicationPolicy.offsetSyncsTopic}} and 
{{ReplicationPolicy.checkpointsTopic}} methods to point at the old topics if 
they still want to use the old internal topics, or to delete them, so they 
are aware of this issue.

> Mirror Maker 2 - KIP690 backward compatibility
> --
>
> Key: KAFKA-15102
> URL: https://issues.apache.org/jira/browse/KAFKA-15102
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.1.0
>Reporter: David Dufour
>Priority: Major
>
> According to KIP690, "When users upgrade an existing MM2 cluster they don’t 
> need to change any of their current configuration as this proposal maintains 
> the default behaviour for MM2."
> Now, the separator is subject to customization.
> As a consequence, when an MM2 upgrade is performed, if the separator was 
> customized with replication.policy.separator, the name of this internal topic 
> changes. It then generates issues like:
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.InvalidTopicException: Topic 
> 'mm2-offset-syncs_bkts28_internal' collides with existing topics: 
> mm2-offset-syncs.bkts28.internal
> It has been observed that the replication can then be broken sometimes 
> several days after the upgrade (reason not identified). By deleting the old 
> topic name, it recovers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-15102) Mirror Maker 2 - KIP690 backward compatibility

2023-06-29 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17738744#comment-17738744
 ] 

Omnia Ibrahim edited comment on KAFKA-15102 at 6/29/23 7:00 PM:


Thanks, [~ddufour1a] for raising this. The backward compatibility mentioned in 
the KIP accounted only for using the default separator configuration and 
didn't address the use of a custom separator (my mistake here). 
[~ChrisEgerton] I think having 
{{replication.policy.internal.topic.separator.enabled}} is a good option.

We can also update the backward compatibility section in KIP-690 to ask 
customers to override the {{ReplicationPolicy.offsetSyncsTopic}} and 
{{ReplicationPolicy.checkpointsTopic}} methods to point at the old topics if 
they still want to use the old internal topics, or to delete them, so they 
are aware of this issue.


was (Author: omnia_h_ibrahim):
Thanks, [~ddufour1a] for raising this. The backward compatibility mentioned in 
the KIP accounted only for using the default separator configuration and 
didn't address the use of a custom separator (my mistake here). 
[~ChrisEgerton] I think having 
{{replication.policy.internal.topic.separator.enabled}} is a good option.

We can also update the backward compatibility section in KIP-690 to ask 
customers to override the {{ReplicationPolicy.offsetSyncsTopic}} and 
{{ReplicationPolicy.checkpointsTopic}} methods to point at the old topics if 
they still want to use the old internal topics, or to delete them, so they 
are aware of this issue.

> Mirror Maker 2 - KIP690 backward compatibility
> --
>
> Key: KAFKA-15102
> URL: https://issues.apache.org/jira/browse/KAFKA-15102
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.1.0
>Reporter: David Dufour
>Priority: Major
>
> According to KIP690, "When users upgrade an existing MM2 cluster they don’t 
> need to change any of their current configuration as this proposal maintains 
> the default behaviour for MM2."
> Now, the separator is subject to customization.
> As a consequence, when an MM2 upgrade is performed, if the separator was 
> customized with replication.policy.separator, the name of this internal topic 
> changes. It then generates issues like:
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.InvalidTopicException: Topic 
> 'mm2-offset-syncs_bkts28_internal' collides with existing topics: 
> mm2-offset-syncs.bkts28.internal
> It has been observed that the replication can then be broken sometimes 
> several days after the upgrade (reason not identified). By deleting the old 
> topic name, it recovers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15102) Mirror Maker 2 - KIP690 backward compatibility

2023-06-29 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17738744#comment-17738744
 ] 

Omnia Ibrahim commented on KAFKA-15102:
---

Thanks, [~ddufour1a] for raising this. The backward compatibility mentioned in 
the KIP accounted only for using the default separator configuration and 
didn't address the use of a custom separator (my mistake here). 
[~ChrisEgerton] I think having 
{{replication.policy.internal.topic.separator.enabled}} is a good option. We 
can also update the backward compatibility section in KIP-690 to ask 
customers to override the {{ReplicationPolicy.offsetSyncsTopic}} and 
{{ReplicationPolicy.checkpointsTopic}} methods to point at the old topics if 
they still want to use the old internal topics, or to delete them, so they 
are aware of this issue.

> Mirror Maker 2 - KIP690 backward compatibility
> --
>
> Key: KAFKA-15102
> URL: https://issues.apache.org/jira/browse/KAFKA-15102
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.1.0
>Reporter: David Dufour
>Priority: Major
>
> According to KIP690, "When users upgrade an existing MM2 cluster they don’t 
> need to change any of their current configuration as this proposal maintains 
> the default behaviour for MM2."
> Now, the separator is subject to customization.
> As a consequence, when an MM2 upgrade is performed, if the separator was 
> customized with replication.policy.separator, the name of this internal topic 
> changes. It then generates issues like:
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.InvalidTopicException: Topic 
> 'mm2-offset-syncs_bkts28_internal' collides with existing topics: 
> mm2-offset-syncs.bkts28.internal
> It has been observed that the replication can then be broken sometimes 
> several days after the upgrade (reason not identified). By deleting the old 
> topic name, it recovers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15063) Throttle number of active PIDs

2023-06-06 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim updated KAFKA-15063:
--
Description: 
{color:#172b4d}Ticket to track KIP-936. Since KIP-679, idempotent{color} 
{color:#172b4d}producers became the default in Kafka; as a result, all 
producer instances will be assigned a PID. The increase in the number of PIDs 
stored in Kafka brokers by {color}{{ProducerStateManager}}{color:#172b4d} 
exposes the broker to OOM errors if it has a high number of producers, or 
rogue or misconfigured clients.{color}

{color:#172b4d}The broker is still exposed to OOM{color} even after KIP-854 
introduced a separate config to expire PIDs independently of transactional 
IDs, if there is a high number of PIDs before {{producer.id.expiration.ms}} 
is exceeded.

As a result, the broker will keep experiencing OOM and become offline. The 
only way to recover from this is to increase the heap.

 

{color:#172b4d}KIP-936 is proposing throttling the number of PIDs per 
KafkaPrincipal {color}

{color:#172b4d}See the KIP-936 details here  
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-936%3A+Throttle+number+of+active+PIDs]
 {color}
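Purely to illustrate what "throttling the number of PIDs per KafkaPrincipal"
means, a toy sketch follows; this is not the caching or enforcement design
proposed in KIP-936, and the class and method names are made up:

{code:java}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Toy illustration only: track distinct producer IDs per principal name and
 *  flag principals that exceed a configured quota, which is the behaviour
 *  KIP-936 proposes to enforce inside the broker. */
public class PidQuotaTracker {
    private final int maxActivePids;
    private final Map<String, Set<Long>> pidsByPrincipal = new ConcurrentHashMap<>();

    public PidQuotaTracker(int maxActivePids) {
        this.maxActivePids = maxActivePids;
    }

    /** Record a PID for a principal; returns true if the principal is now over quota. */
    public boolean record(String principalName, long producerId) {
        Set<Long> pids = pidsByPrincipal.computeIfAbsent(
                principalName, p -> ConcurrentHashMap.newKeySet());
        pids.add(producerId);
        return pids.size() > maxActivePids;
    }
}
{code}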

  was:
{color:#172b4d}Ticket to track KIP-936. Since KIP-679, idempotent{color} 
{color:#172b4d}producers became the default in Kafka; as a result, all 
producer instances will be assigned a PID. The increase in the number of PIDs 
stored in Kafka brokers by {color}{{ProducerStateManager}}{color:#172b4d} 
exposes the broker to OOM errors if it has a high number of producers, or 
rogue or misconfigured clients.{color}

{color:#172b4d}The broker is still exposed to OOM{color} even after KIP-854 
introduced a separate config to expire PIDs independently of transactional 
IDs, if there is a high number of PIDs before {{producer.id.expiration.ms}} 
is exceeded.

As a result, the broker will keep experiencing OOM and become offline. The 
only way to recover from this is to increase the heap.

 

{color:#172b4d}KIP-936 is proposing throttling the number of PIDs per 
KafkaPrincipal {color}

{color:#172b4d}See the KIP details here  
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-936%3A+Throttle+number+of+active+PIDs]
 {color}


> Throttle number of active PIDs
> --
>
> Key: KAFKA-15063
> URL: https://issues.apache.org/jira/browse/KAFKA-15063
> Project: Kafka
>  Issue Type: New Feature
>  Components: core, producer 
>Affects Versions: 2.8.0, 3.1.0, 3.0.0, 3.2.0, 3.3, 3.4.0
>Reporter: Omnia Ibrahim
>Priority: Major
>
> {color:#172b4d}Ticket to track KIP-936. Since KIP-679, idempotent{color} 
> {color:#172b4d}producers became the default in Kafka; as a result, all 
> producer instances will be assigned a PID. The increase in the number of 
> PIDs stored in Kafka brokers by {color}{{ProducerStateManager}}{color:#172b4d} 
> exposes the broker to OOM errors if it has a high number of producers, or 
> rogue or misconfigured clients.{color}
> {color:#172b4d}The broker is still exposed to OOM{color} even after KIP-854 
> introduced a separate config to expire PIDs independently of transactional 
> IDs, if there is a high number of PIDs before {{producer.id.expiration.ms}} 
> is exceeded.
> As a result, the broker will keep experiencing OOM and become offline. The 
> only way to recover from this is to increase the heap.
>  
> {color:#172b4d}KIP-936 is proposing throttling the number of PIDs per 
> KafkaPrincipal {color}
> {color:#172b4d}See the KIP-936 details here  
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-936%3A+Throttle+number+of+active+PIDs]
>  {color}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15063) Throttle number of active PIDs

2023-06-06 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim updated KAFKA-15063:
--
Description: 
{color:#172b4d}Ticket to track KIP-936. Since KIP-679, idempotent{color} 
{color:#172b4d}producers became the default in Kafka; as a result, all 
producer instances will be assigned a PID. The increase in the number of PIDs 
stored in Kafka brokers by {color}{{ProducerStateManager}}{color:#172b4d} 
exposes the broker to OOM errors if it has a high number of producers, or 
rogue or misconfigured clients.{color}

{color:#172b4d}The broker is still exposed to OOM{color} even after KIP-854 
introduced a separate config to expire PIDs independently of transactional 
IDs, if there is a high number of PIDs before {{producer.id.expiration.ms}} 
is exceeded.

As a result, the broker will keep experiencing OOM and become offline. The 
only way to recover from this is to increase the heap.

 

{color:#172b4d}KIP-936 is proposing throttling the number of PIDs per 
KafkaPrincipal {color}

{color:#172b4d}See the KIP details here  
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-936%3A+Throttle+number+of+active+PIDs]
 {color}

  was:
{color:#172b4d}Ticket to track KIP-936. Since KIP-679, idempotent producers 
became the default in Kafka; as a result, all producer instances will be 
assigned a PID. The increase in the number of PIDs stored in Kafka brokers by 
{{ProducerStateManager}} exposes the broker to OOM errors if it has a high 
number of producers, or rogue or misconfigured clients.{color}

{color:#172b4d}The broker is still exposed to OOM{color} even after KIP-854 
introduced a separate config to expire PIDs independently of transactional 
IDs, if there is a high number of PIDs before {{producer.id.expiration.ms}} 
is exceeded.

As a result, the broker will keep experiencing OOM and become offline. The 
only way to recover from this is to increase the heap.

{color:#172b4d}KIP-936 is proposing throttling the number of PIDs per 
KafkaPrincipal{color}

{color:#172b4d}See 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-936%3A+Throttle+number+of+active+PIDs]
 {color}


> Throttle number of active PIDs
> --
>
> Key: KAFKA-15063
> URL: https://issues.apache.org/jira/browse/KAFKA-15063
> Project: Kafka
>  Issue Type: New Feature
>  Components: core, producer 
>Affects Versions: 2.8.0, 3.1.0, 3.0.0, 3.2.0, 3.3, 3.4.0
>Reporter: Omnia Ibrahim
>Priority: Major
>
> Ticket to track KIP-936. Since KIP-679, idempotent producers became the 
> default in Kafka; as a result, every producer instance is assigned a PID. 
> The increase in the number of PIDs stored in Kafka brokers by 
> {{ProducerStateManager}} exposes the broker to OOM errors if it has a high 
> number of producers, or a rogue or misconfigured client.
> The broker is still exposed to OOM even after KIP-854 introduced a separate 
> config to expire PIDs from transaction IDs, if a high number of PIDs 
> accumulates before {{producer.id.expiration.ms}} is exceeded.
> As a result, the broker will keep experiencing OOM and go offline. The only 
> way to recover from this is to increase the heap.
>  
> KIP-936 proposes throttling the number of PIDs per KafkaPrincipal.
> See the KIP details here: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-936%3A+Throttle+number+of+active+PIDs]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15063) Throttle number of active PIDs

2023-06-06 Thread Omnia Ibrahim (Jira)
Omnia Ibrahim created KAFKA-15063:
-

 Summary: Throttle number of active PIDs
 Key: KAFKA-15063
 URL: https://issues.apache.org/jira/browse/KAFKA-15063
 Project: Kafka
  Issue Type: New Feature
  Components: core, producer 
Affects Versions: 3.4.0, 3.2.0, 3.0.0, 3.1.0, 2.8.0, 3.3
Reporter: Omnia Ibrahim


Ticket to track KIP-936. Since KIP-679, idempotent producers became the 
default in Kafka; as a result, every producer instance is assigned a PID. The 
increase in the number of PIDs stored in Kafka brokers by 
{{ProducerStateManager}} exposes the broker to OOM errors if it has a high 
number of producers, or a rogue or misconfigured client.

The broker is still exposed to OOM even after KIP-854 introduced a separate 
config to expire PIDs from transaction IDs, if a high number of PIDs 
accumulates before {{producer.id.expiration.ms}} is exceeded.

As a result, the broker will keep experiencing OOM and go offline. The only 
way to recover from this is to increase the heap.

KIP-936 proposes throttling the number of PIDs per KafkaPrincipal.

See 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-936%3A+Throttle+number+of+active+PIDs]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-10897) kafka quota optimization

2023-05-31 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17728074#comment-17728074
 ] 

Omnia Ibrahim commented on KAFKA-10897:
---

Hi [~afshing], to propose any API changes you need to write a Kafka 
Improvement Proposal (known as a KIP). You can find the details here: 
[https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals]

> kafka quota optimization
> 
>
> Key: KAFKA-10897
> URL: https://issues.apache.org/jira/browse/KAFKA-10897
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, clients, config, consumer, core
>Affects Versions: 2.7.0
>Reporter: yangyijun
>Assignee: Kahn Cheny
>Priority: Blocker
>
> *1. The current quota dimensions are as follows:*
> {code:java}
> /config/users/<user>/clients/<client-id>
> /config/users/<user>/clients/<default>
> /config/users/<user>
> /config/users/<default>/clients/<client-id>
> /config/users/<default>/clients/<default>
> /config/users/<default>
> /config/clients/<client-id>
> /config/clients/<default>{code}
> *2. Existing problems:*
>  
> {code:java}
> 2.1. The quota dimensions are not fine-grained enough.
> 2.2. When multiple users on the same broker produce and consume a large 
> amount of data at the same time, for the broker to run normally the sum of 
> all user quota bytes must not exceed the broker's throughput limit.
> 2.3. Even if no user reaches the broker's upper limit, if all user traffic 
> is concentrated on a few disks and exceeds the disks' read-write capacity, 
> all produce and consume requests will be blocked.
> 2.4. Sometimes just one topic's rate increases sharply under a user, so we 
> only need to limit the sharply increasing topics.
> {code}
>  
> *3. Suggestions for improvement*
> {code:java}
> 3.1. Add an upper limit on a single broker's quota bytes.
> 3.2. Add an upper limit on a single disk's quota bytes on the broker.
> 3.3. Add topic quota dimensions.{code}
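
For context, the granularity the reporter wants to extend is what the Admin 
API exposes today. A minimal sketch (broker address, entity names, and rates 
are hypothetical) of setting a per-user/per-client quota at the existing 
dimensions:
{code:java}
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.quota.ClientQuotaAlteration;
import org.apache.kafka.common.quota.ClientQuotaEntity;

public class QuotaSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Quota entity at the user+client level, i.e. the
            // /config/users/<user>/clients/<client-id> dimension above.
            ClientQuotaEntity entity = new ClientQuotaEntity(
                Map.of(ClientQuotaEntity.USER, "user1",
                       ClientQuotaEntity.CLIENT_ID, "clientA"));
            // Throttle produce to 1 MB/s and fetch to 2 MB/s for this entity.
            ClientQuotaAlteration alteration = new ClientQuotaAlteration(
                entity,
                List.of(new ClientQuotaAlteration.Op("producer_byte_rate", 1048576.0),
                        new ClientQuotaAlteration.Op("consumer_byte_rate", 2097152.0)));
            admin.alterClientQuotas(List.of(alteration)).all().get();
        }
    }
}
{code}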



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14737) Move kafka.utils.json to server-common

2023-02-26 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17693654#comment-17693654
 ] 

Omnia Ibrahim commented on KAFKA-14737:
---

I will add a similar class in `server-common` to unblock moving these 
commands out of core. Later we can decide whether we need to switch to this 
server-common one everywhere or not.
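
As a rough sketch of what such a server-common class might look like (the 
class name `JsonUtils` and its method names are hypothetical; like 
`kafka.utils.Json`, it would be a thin wrapper around a shared Jackson 
ObjectMapper):
{code:java}
import java.util.Optional;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical server-common counterpart of kafka.utils.Json: a small,
// exception-safe wrapper around one shared Jackson ObjectMapper.
public final class JsonUtils {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    private JsonUtils() {}

    // Parse a JSON string, returning Optional.empty() on malformed input
    // instead of throwing, mirroring kafka.utils.Json.parseFull.
    public static Optional<JsonNode> parseFull(String input) {
        try {
            return Optional.ofNullable(MAPPER.readTree(input));
        } catch (Exception e) {
            return Optional.empty();
        }
    }

    // Serialize any object to its JSON string representation.
    public static String encodeAsString(Object value) throws Exception {
        return MAPPER.writeValueAsString(value);
    }
}
{code}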

> Move kafka.utils.json to server-common
> --
>
> Key: KAFKA-14737
> URL: https://issues.apache.org/jira/browse/KAFKA-14737
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Omnia Ibrahim
>Priority: Major
>
> The JSON utils are used by a few tools (DeleteRecordsCommand, 
> ReassignPartitionsCommand and LeaderElectionCommand) and also by a few other 
> classes in core.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14588) Move ConfigCommand to tools

2023-02-26 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17693651#comment-17693651
 ] 

Omnia Ibrahim commented on KAFKA-14588:
---

[~mimaison] `ConfigCommand` depends on `kafka.server`; are we moving this 
package out of core as well?

> Move ConfigCommand to tools
> ---
>
> Key: KAFKA-14588
> URL: https://issues.apache.org/jira/browse/KAFKA-14588
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Omnia Ibrahim
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-14587) Move AclCommand to tools

2023-02-26 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17693514#comment-17693514
 ] 

Omnia Ibrahim edited comment on KAFKA-14587 at 2/26/23 2:13 PM:


[~mimaison] AclCommand depends on `kafka.security.authorizer` and 
`kafka.server`. I can see tasks for moving `kafka.security.authorizer` out of 
core, however I can't see any tasks for moving `kafka.server`; do we have any 
plans for this?

The tests rely on `kafka.server.KafkaBroker` as well.


was (Author: omnia_h_ibrahim):
[~mimaison] AclCommand depends on `kafka.security.authorizer` and 
`kafka.server`. I can see tasks for moving `kafka.security.authorizer` out of 
core, however I can't see any tasks for moving `kafka.server.KafkaConfig`; do 
we have any plans for this?

> Move AclCommand to tools
> 
>
> Key: KAFKA-14587
> URL: https://issues.apache.org/jira/browse/KAFKA-14587
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Omnia Ibrahim
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14587) Move AclCommand to tools

2023-02-25 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17693514#comment-17693514
 ] 

Omnia Ibrahim commented on KAFKA-14587:
---

[~mimaison] AclCommand depends on `kafka.security.authorizer` and 
`kafka.server`. I can see tasks for moving `kafka.security.authorizer` out of 
core, however I can't see any tasks for moving `kafka.server.KafkaConfig`; do 
we have any plans for this?

> Move AclCommand to tools
> 
>
> Key: KAFKA-14587
> URL: https://issues.apache.org/jira/browse/KAFKA-14587
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Omnia Ibrahim
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-14737) Move kafka.utils.json to server-common

2023-02-21 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-14737:
-

Assignee: Omnia Ibrahim

> Move kafka.utils.json to server-common
> --
>
> Key: KAFKA-14737
> URL: https://issues.apache.org/jira/browse/KAFKA-14737
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Omnia Ibrahim
>Priority: Major
>
> The JSON utils are used by a few tools (DeleteRecordsCommand, 
> ReassignPartitionsCommand and LeaderElectionCommand) and also by a few other 
> classes in core.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-14595) Move ReassignPartitionsCommand to tools

2023-02-06 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684635#comment-17684635
 ] 

Omnia Ibrahim edited comment on KAFKA-14595 at 2/6/23 12:41 PM:


Hi [~nizhikov], just a note: I moved the methods 
`TestUtils.setReplicationThrottleForPartitions` and 
`TestUtils.removeReplicationThrottleForPartitions` from `TestUtils` to 
`ToolsTestUtils`, as they are used only by `TopicCommand` and 
`ReassignPartitionCommand`, to avoid converting between Scala and Java 
collections. The changes are here: 
[https://github.com/apache/kafka/pull/13201]
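
For reference, a minimal sketch (the class and method signature are 
assumptions for illustration, not the actual ToolsTestUtils code) of what a 
pure-Java throttle helper does, using the broker configs such helpers set:
{code:java}
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public final class ThrottleSketch {
    // Hypothetical Java counterpart of the moved helper: cap leader and
    // follower replication rates on the given brokers via the Admin API,
    // with no Scala collection conversions needed from Java callers.
    public static void setReplicationThrottle(Admin admin, List<Integer> brokerIds,
                                              long throttleBytes) throws Exception {
        Map<ConfigResource, Collection<AlterConfigOp>> ops = new HashMap<>();
        for (Integer id : brokerIds) {
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, id.toString());
            ops.put(broker, List.of(
                new AlterConfigOp(new ConfigEntry("leader.replication.throttled.rate",
                        Long.toString(throttleBytes)), AlterConfigOp.OpType.SET),
                new AlterConfigOp(new ConfigEntry("follower.replication.throttled.rate",
                        Long.toString(throttleBytes)), AlterConfigOp.OpType.SET)));
        }
        admin.incrementalAlterConfigs(ops).all().get();
    }
}
{code}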


was (Author: omnia_h_ibrahim):
Hi [~nizhikov], just a note: I moved the methods 
`TestUtils.setReplicationThrottleForPartitions` and 
`TestUtils.removeReplicationThrottleForPartitions` from `TestUtils` to 
`ToolsTestUtils`, as they are used only by `TopicCommand` and 
`ReassignPartitionCommand`, to avoid converting between Scala and Java 
collections. The changes are here: 
https://github.com/apache/kafka/pull/13201

> Move ReassignPartitionsCommand to tools
> ---
>
> Key: KAFKA-14595
> URL: https://issues.apache.org/jira/browse/KAFKA-14595
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Nikolay Izhikov
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14595) Move ReassignPartitionsCommand to tools

2023-02-06 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684635#comment-17684635
 ] 

Omnia Ibrahim commented on KAFKA-14595:
---

Hi [~nizhikov], just a note: I moved the methods 
`TestUtils.setReplicationThrottleForPartitions` and 
`TestUtils.removeReplicationThrottleForPartitions` from `TestUtils` to 
`ToolsTestUtils`, as they are used only by `TopicCommand` and 
`ReassignPartitionCommand`, to avoid converting between Scala and Java 
collections. The changes are here: 
https://github.com/apache/kafka/pull/13201

> Move ReassignPartitionsCommand to tools
> ---
>
> Key: KAFKA-14595
> URL: https://issues.apache.org/jira/browse/KAFKA-14595
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Nikolay Izhikov
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-14578) Move ConsumerPerformance to tools

2023-02-05 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-14578:
-

Assignee: (was: Omnia Ibrahim)

> Move ConsumerPerformance to tools
> -
>
> Key: KAFKA-14578
> URL: https://issues.apache.org/jira/browse/KAFKA-14578
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14593) Move LeaderElectionCommand to tools

2023-02-04 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684196#comment-17684196
 ] 

Omnia Ibrahim commented on KAFKA-14593:
---

Hi [~adupriez], sorry for the late reply. This Jira is in progress. I 
unassigned myself from two other Jiras; you can have a look at those.
Thanks

> Move LeaderElectionCommand to tools
> ---
>
> Key: KAFKA-14593
> URL: https://issues.apache.org/jira/browse/KAFKA-14593
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Omnia Ibrahim
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-14577) Move ConsoleProducer to tools

2023-02-03 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-14577:
-

Assignee: (was: Omnia Ibrahim)

> Move ConsoleProducer to tools
> -
>
> Key: KAFKA-14577
> URL: https://issues.apache.org/jira/browse/KAFKA-14577
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-14579) Move DumpLogSegments to tools

2023-02-03 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim reassigned KAFKA-14579:
-

Assignee: (was: Omnia Ibrahim)

> Move DumpLogSegments to tools
> -
>
> Key: KAFKA-14579
> URL: https://issues.apache.org/jira/browse/KAFKA-14579
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14595) Move ReassignPartitionsCommand to tools

2023-02-02 Thread Omnia Ibrahim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17683377#comment-17683377
 ] 

Omnia Ibrahim commented on KAFKA-14595:
---

[~nizhikov] sure

> Move ReassignPartitionsCommand to tools
> ---
>
> Key: KAFKA-14595
> URL: https://issues.apache.org/jira/browse/KAFKA-14595
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Omnia Ibrahim
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

