[GitHub] [kafka] Owen-CH-Leung commented on a diff in pull request #14057: KAFKA-15194-Prepend-Offset-as-Filename

2023-07-20 Thread via GitHub


Owen-CH-Leung commented on code in PR #14057:
URL: https://github.com/apache/kafka/pull/14057#discussion_r1270242829


##
storage/src/test/java/org/apache/kafka/server/log/remote/storage/RemoteLogSegmentFileset.java:
##
@@ -59,9 +59,9 @@
  * the local tiered storage:
  *
  * 
- * / storage-directory / topic-partition-uuidBase64 / oAtiIQ95REujbuzNd_lkLQ.log
- *  . oAtiIQ95REujbuzNd_lkLQ.index
- *  . oAtiIQ95REujbuzNd_lkLQ.timeindex
+ * / storage-directory / topic-partition-uuidBase64 / startOffset-oAtiIQ95REujbuzNd_lkLQ.log

Review Comment:
   Sure. I've added a dummy value there.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] Owen-CH-Leung commented on a diff in pull request #14057: KAFKA-15194-Prepend-Offset-as-Filename

2023-07-20 Thread via GitHub


Owen-CH-Leung commented on code in PR #14057:
URL: https://github.com/apache/kafka/pull/14057#discussion_r1270242568


##
storage/src/test/java/org/apache/kafka/server/log/remote/storage/LocalTieredStorageTest.java:
##
@@ -399,20 +403,21 @@ public Verifier(final LocalTieredStorage remoteStorage, final TopicIdPartition t
 this.topicIdPartition = requireNonNull(topicIdPartition);
 }
 
-private List expectedPaths(final RemoteLogSegmentId id) {
+private List expectedPaths(final RemoteLogSegmentMetadata metadata) {
 final String rootPath = getStorageRootDirectory();
 TopicPartition tp = topicIdPartition.topicPartition();
 final String topicPartitionSubpath = format("%s-%d-%s", tp.topic(), tp.partition(),
 topicIdPartition.topicId());
-final String uuid = id.id().toString();
+final String uuid = metadata.remoteLogSegmentId().id().toString();
+final long startOffset = metadata.startOffset();
 
 return Arrays.asList(
-Paths.get(rootPath, topicPartitionSubpath, uuid + LogFileUtils.LOG_FILE_SUFFIX),
-Paths.get(rootPath, topicPartitionSubpath, uuid + LogFileUtils.INDEX_FILE_SUFFIX),
-Paths.get(rootPath, topicPartitionSubpath, uuid + LogFileUtils.TIME_INDEX_FILE_SUFFIX),
-Paths.get(rootPath, topicPartitionSubpath, uuid + LogFileUtils.TXN_INDEX_FILE_SUFFIX),
-Paths.get(rootPath, topicPartitionSubpath, uuid + LEADER_EPOCH_CHECKPOINT.getSuffix()),
-Paths.get(rootPath, topicPartitionSubpath, uuid + LogFileUtils.PRODUCER_SNAPSHOT_FILE_SUFFIX)
+Paths.get(rootPath, topicPartitionSubpath, startOffset + "-" + uuid + LogFileUtils.LOG_FILE_SUFFIX),

Review Comment:
   @divijvaidya I've changed the code to use 
`LogFileUtils#filenamePrefixFromOffset(long offset)`. The filename should now 
look like a real log file's, e.g. `0011-oAtiIQ95REujbuzNd_lkLQ.log`
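As a rough illustration of the naming scheme described above: Kafka log segment filenames zero-pad the base offset to 20 digits. This Python sketch only mimics the Java `LogFileUtils.filenamePrefixFromOffset`; it is not the actual implementation, and the helper names are made up for the example.

```python
def filename_prefix_from_offset(offset: int) -> str:
    """Zero-pad the start offset to 20 digits, mimicking Kafka log filenames."""
    return f"{offset:020d}"

def remote_segment_filename(start_offset: int, uuid_base64: str, suffix: str) -> str:
    """Prepend the padded start offset to the segment's base64 UUID filename."""
    return f"{filename_prefix_from_offset(start_offset)}-{uuid_base64}{suffix}"

print(remote_segment_filename(11, "oAtiIQ95REujbuzNd_lkLQ", ".log"))
# 00000000000000000011-oAtiIQ95REujbuzNd_lkLQ.log
```

With this scheme, listing the storage directory sorts segments by start offset lexicographically, which is why real log filenames use fixed-width padding.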



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-15223) Need more clarity in documentation for upgrade/downgrade procedures and limitations across releases.

2023-07-20 Thread kaushik srinivas (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745419#comment-17745419
 ] 

kaushik srinivas commented on KAFKA-15223:
--

[~ijuma], can you please help us with this?

> Need more clarity in documentation for upgrade/downgrade procedures and 
> limitations across releases.
> 
>
> Key: KAFKA-15223
> URL: https://issues.apache.org/jira/browse/KAFKA-15223
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 3.4.0, 3.3.1, 3.3.2, 3.5.0, 3.4.1
>Reporter: kaushik srinivas
>Priority: Critical
>
> Referring to the upgrade documentation for apache kafka.
> [https://kafka.apache.org/34/documentation.html#upgrade_3_4_0]
> There is confusion with respect to the statements below from the above-linked 
> section of the Apache docs.
> "If you are upgrading from a version prior to 2.1.x, please see the note 
> below about the change to the schema used to store consumer offsets. *Once 
> you have changed the inter.broker.protocol.version to the latest version, it 
> will not be possible to downgrade to a version prior to 2.1."*
> The above statement says that a downgrade to a version prior to "2.1" is not 
> possible once the inter.broker.protocol.version has been upgraded to the 
> latest version.
> But there is another statement in *point 4* of the documentation, as below:
> "Restart the brokers one by one for the new protocol version to take effect. 
> {*}Once the brokers begin using the latest protocol version, it will no 
> longer be possible to downgrade the cluster to an older version.{*}"
>  
> These two statements are repeated across a lot of prior releases of Kafka and 
> are confusing.
> Below are the questions:
>  # Is a downgrade not possible at all to *"any"* older version of Kafka once 
> the inter.broker.protocol.version is updated to the latest version, *OR* are 
> downgrades impossible only to versions *"<2.1"*?
>  # Suppose one takes an approach similar to the upgrade even for the downgrade 
> path, i.e. downgrade the inter.broker.protocol.version first to the previous 
> version, then downgrade the Kafka software/code to the previous release 
> revision. Does a downgrade work with this approach?
> Can these two questions be documented if the answers are already known?
> Maybe a downgrade guide could be created too, similar to the existing upgrade 
> guide?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


RE: [GitHub] [kafka] stevenbooke commented on a diff in pull request #13842: KAFKA-14995: Automate asf.yaml collaborators refresh

2023-07-20 Thread miltan




-Original Message-
From: stevenbooke (via GitHub) [mailto:g...@apache.org] 
Sent: 21 July 2023 06:05
To: jira@kafka.apache.org
Subject: [GitHub] [kafka] stevenbooke commented on a diff in pull request 
#13842: KAFKA-14995: Automate asf.yaml collaborators refresh


stevenbooke commented on code in PR #13842:
URL: https://github.com/apache/kafka/pull/13842#discussion_r1270124376


##
refresh-collaborators.py:
##
@@ -35,7 +36,10 @@
 contributors_login_to_commit_volume = {}
 end_date = datetime.now()
 start_date = end_date - timedelta(days=365)
+repo = g.get_repo("apache/kafka")
 for commit in repo.get_commits(since=start_date, until=end_date):
+if commit.author is None and commit.author.login is None:

Review Comment:
   Correct, will change.
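For context on the None-check under review: with `and`, Python still evaluates the second operand when the first is true, so `commit.author is None and commit.author.login is None` raises AttributeError exactly when `commit.author` is None. A hedged sketch of a safe guard, using stand-in classes rather than real PyGithub objects (the actual fix applied in the PR is not quoted here):

```python
class Author:
    """Stand-in for a PyGithub NamedUser (assumption, not the real class)."""
    def __init__(self, login):
        self.login = login

class Commit:
    """Stand-in for a PyGithub commit; `author` may be None."""
    def __init__(self, author):
        self.author = author

def has_usable_author(commit) -> bool:
    # Short-circuiting `and` on the positive checks never dereferences None.
    return commit.author is not None and commit.author.login is not None

commits = [Commit(None), Commit(Author(None)), Commit(Author("octocat"))]
usable = [c for c in commits if has_usable_author(c)]
print(len(usable))  # 1
```

The same predicate can then drive the commit-volume loop, skipping commits without a resolvable GitHub login instead of crashing on them.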



--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go 
to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



RE: [GitHub] [kafka] stevenbooke commented on a diff in pull request #13842: KAFKA-14995: Automate asf.yaml collaborators refresh

2023-07-20 Thread miltan




-Original Message-
From: stevenbooke (via GitHub) [mailto:g...@apache.org] 
Sent: 21 July 2023 07:48
To: jira@kafka.apache.org
Subject: [GitHub] [kafka] stevenbooke commented on a diff in pull request 
#13842: KAFKA-14995: Automate asf.yaml collaborators refresh


stevenbooke commented on code in PR #13842:
URL: https://github.com/apache/kafka/pull/13842#discussion_r1270163490


##
refresh-collaborators.py:
##
@@ -59,9 +64,20 @@
 yaml_content["github"]["collaborators"] = collaborators
 
 # Convert the updated content back to YAML
-updated_yaml = yaml.safe_dump(yaml_content)
+updated_yaml = io.StringIO()
+yml.dump(yaml_content, updated_yaml)
+updated_yaml_str = updated_yaml.getvalue()
 
-# Commit and push the changes
+# Create a new branch for the changes
+new_branch_name = "update-asf.yaml-github-whitelist-and-collaborators"

Review Comment:
   I'll look into this more in the coming days and post an update.
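The diff above dumps YAML through an in-memory stream instead of `yaml.safe_dump`, presumably because `yml` is a ruamel.yaml-style dumper that writes only to streams (an assumption; the PR's setup code is not quoted here). The stream-to-string pattern itself looks like this, with a trivial stand-in dumper:

```python
import io

def dump_to_string(dump_fn, data) -> str:
    """Run a stream-based dumper against a StringIO and return the text."""
    buf = io.StringIO()
    dump_fn(data, buf)
    return buf.getvalue()

# Minimal stand-in for a stream-based dumper such as ruamel.yaml's YAML().dump;
# it only handles flat string-keyed dicts, enough to show the pattern.
def fake_dump(data, stream):
    for key, value in data.items():
        stream.write(f"{key}: {value}\n")

print(dump_to_string(fake_dump, {"collaborators": "[a, b]"}))
# collaborators: [a, b]
```

In the script, the returned string would then be committed back to `.asf.yaml` on the new branch.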



--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go 
to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



RE: [GitHub] [kafka] showuon merged pull request #13999: KAFKA-15176: add tests for tiered storage metrics

2023-07-20 Thread miltan




-Original Message-
From: showuon (via GitHub) [mailto:g...@apache.org] 
Sent: 21 July 2023 08:01
To: jira@kafka.apache.org
Subject: [GitHub] [kafka] showuon merged pull request #13999: KAFKA-15176: add 
tests for tiered storage metrics


showuon merged PR #13999:
URL: https://github.com/apache/kafka/pull/13999


--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go 
to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



RE: [GitHub] [kafka] showuon commented on pull request #14045: MINOR: refactor(storage): topic-based RLMM consumer-manager/task related improvements

2023-07-20 Thread miltan




-Original Message-
From: showuon (via GitHub) [mailto:g...@apache.org] 
Sent: 21 July 2023 08:02
To: jira@kafka.apache.org
Subject: [GitHub] [kafka] showuon commented on pull request #14045: MINOR: 
refactor(storage): topic-based RLMM consumer-manager/task related improvements


showuon commented on PR #14045:
URL: https://github.com/apache/kafka/pull/14045#issuecomment-1644902541

   Rerunning CI build: 
https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14045/6/


--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go 
to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



RE: [jira] [Resolved] (KAFKA-15176) Add missing tests for remote storage metrics

2023-07-20 Thread miltan




-Original Message-
From: Luke Chen (Jira) [mailto:j...@apache.org] 
Sent: 21 July 2023 08:01
To: jira@kafka.apache.org
Subject: [jira] [Resolved] (KAFKA-15176) Add missing tests for remote storage 
metrics


 [ 
https://issues.apache.org/jira/browse/KAFKA-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen resolved KAFKA-15176.
---
Fix Version/s: 3.6.0
   Resolution: Fixed

> Add missing tests for remote storage metrics
> 
>
> Key: KAFKA-15176
> URL: https://issues.apache.org/jira/browse/KAFKA-15176
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Luke Chen
>Assignee: Luke Chen
>Priority: Major
> Fix For: 3.6.0
>
>
> {{RemoteLogReaderTaskQueueSize}}
> {{RemoteLogReaderAvgIdlePercent}}
> {{RemoteLogManagerTasksAvgIdlePercent}}
> https://github.com/apache/kafka/pull/13944#pullrequestreview-1513943273



--
This message was sent by Atlassian Jira
(v8.20.10#820010)



RE: [GitHub] [kafka] github-actions[bot] commented on pull request #13619: Initial support for OpenJDK CRaC snapshotting

2023-07-20 Thread miltan



-Original Message-
From: github-actions[bot] (via GitHub) [mailto:g...@apache.org] 
Sent: 21 July 2023 09:03
To: jira@kafka.apache.org
Subject: [GitHub] [kafka] github-actions[bot] commented on pull request #13619: 
Initial support for OpenJDK CRaC snapshotting


github-actions[bot] commented on PR #13619:
URL: https://github.com/apache/kafka/pull/13619#issuecomment-1644936041

   This PR is being marked as stale since it has not had any activity in 90 
days. If you would like to keep this PR alive, please ask a committer for 
review. If the PR has merge conflicts, please update it with the latest from 
trunk (or the appropriate release branch). If this PR is no longer valid or 
desired, please feel free to close it. If no activity occurs in the next 30 
days, it will be automatically closed.


--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go 
to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



RE: [jira] [Commented] (KAFKA-14780) Make RefreshingHttpsJwksTest#testSecondaryRefreshAfterElapsedDelay deterministic

2023-07-20 Thread miltan




-Original Message-
From: Fei Xie (Jira) [mailto:j...@apache.org] 
Sent: 21 July 2023 09:31
To: jira@kafka.apache.org
Subject: [jira] [Commented] (KAFKA-14780) Make 
RefreshingHttpsJwksTest#testSecondaryRefreshAfterElapsedDelay deterministic


[ 
https://issues.apache.org/jira/browse/KAFKA-14780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745366#comment-17745366
 ] 

Fei Xie commented on KAFKA-14780:
-

Hello there, is anyone actively working on this ticket? If not, could you 
assign this ticket to me please? Thx!

> Make RefreshingHttpsJwksTest#testSecondaryRefreshAfterElapsedDelay 
> deterministic
> 
>
> Key: KAFKA-14780
> URL: https://issues.apache.org/jira/browse/KAFKA-14780
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Alexandre Dupriez
>Assignee: Alexandre Dupriez
>Priority: Minor
>
> The test {{RefreshingHttpsJwksTest#testSecondaryRefreshAfterElapsedDelay}} 
> relies on the actual system clock which makes it frequently fail on my poor 
> intellij setup.
>  
> The {{RefreshingHttpsJwks}} component creates and uses a scheduled 
> executor service. We could expose the scheduling mechanism to be able to mock 
> its behaviour. One way to do so could be to use the {{KafkaScheduler}}, which 
> has a {{MockScheduler}} implementation relying on {{MockTime}} instead of 
> the real time clock.
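The determinism fix suggested in the ticket (swap the real clock for a mockable one) can be illustrated in a language-agnostic way. This Python sketch mirrors the MockTime idea only; it is not Kafka's `MockScheduler` API, and all names here are invented for the example.

```python
class MockTime:
    """A virtual clock: sleeping advances time instantly, with no real waiting."""
    def __init__(self, ms: int = 0):
        self.ms = ms
    def milliseconds(self) -> int:
        return self.ms
    def sleep(self, delta_ms: int) -> None:
        self.ms += delta_ms  # advance virtual time, never the wall clock

class Refresher:
    """Refreshes once `interval_ms` of (virtual) time has elapsed."""
    def __init__(self, time, interval_ms: int):
        self.time, self.interval_ms = time, interval_ms
        self.last_refresh = time.milliseconds()
        self.refreshes = 0
    def maybe_refresh(self) -> None:
        if self.time.milliseconds() - self.last_refresh >= self.interval_ms:
            self.refreshes += 1
            self.last_refresh = self.time.milliseconds()

t = MockTime()
r = Refresher(t, interval_ms=1000)
r.maybe_refresh()   # no virtual time elapsed: no refresh
t.sleep(1000)       # deterministic, regardless of how slow the test host is
r.maybe_refresh()
print(r.refreshes)  # 1
```

Because the test controls every tick, the secondary-refresh assertion can never flake on a slow or paused machine, which is the failure mode described above.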



--
This message was sent by Atlassian Jira
(v8.20.10#820010)












[GitHub] [kafka] satishd commented on pull request #13999: KAFKA-15176: add tests for tiered storage metrics

2023-07-20 Thread via GitHub


satishd commented on PR #13999:
URL: https://github.com/apache/kafka/pull/13999#issuecomment-1644979117

This commit caused the below build failure. I raised 
https://github.com/apache/kafka/pull/14065 to fix it. 
   
   ```
   [Error] 
/Users/satishd/repos/apache-kafka/core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala:3526:77:
 the result type of an implicit conversion must be more specific than Object
   [Error] 
/Users/satishd/repos/apache-kafka/core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala:3530:70:
 the result type of an implicit conversion must be more specific than Object
   [Warn] 
/Users/satishd/repos/apache-kafka/core/src/test/scala/unit/kafka/server/ServerGenerateBrokerIdTest.scala:23:21:
 imported `QuorumTestHarness` is permanently hidden by definition of object 
QuorumTestHarness in package server
   [Warn] 
/Users/satishd/repos/apache-kafka/core/src/test/scala/unit/kafka/server/ServerGenerateClusterIdTest.scala:29:21:
 imported `QuorumTestHarness` is permanently hidden by definition of object 
QuorumTestHarness in package server
   [Error] 
/Users/satishd/repos/apache-kafka/core/src/test/scala/unit/kafka/utils/TestUtils.scala:1438:15:
 ambiguous reference to overloaded definition,
   both method doReturn in class Mockito of type (x$1: Any, x$2: 
Object*)org.mockito.stubbing.Stubber
   and  method doReturn in class Mockito of type (x$1: 
Any)org.mockito.stubbing.Stubber
   match argument types (kafka.log.UnifiedLog)
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] satishd opened a new pull request, #14065: MINOR Fix the build failure.

2023-07-20 Thread via GitHub


satishd opened a new pull request, #14065:
URL: https://github.com/apache/kafka/pull/14065

   Fixing the build failure caused by the earlier commit 
https://github.com/apache/kafka/commit/27ea025e33aab525e96bef24840414f7a4e132f1 
   
   
   ```
   [Error] 
/Users/satishd/repos/apache-kafka/core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala:3526:77:
 the result type of an implicit conversion must be more specific than Object
   [Error] 
/Users/satishd/repos/apache-kafka/core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala:3530:70:
 the result type of an implicit conversion must be more specific than Object
   [Warn] 
/Users/satishd/repos/apache-kafka/core/src/test/scala/unit/kafka/server/ServerGenerateBrokerIdTest.scala:23:21:
 imported `QuorumTestHarness` is permanently hidden by definition of object 
QuorumTestHarness in package server
   [Warn] 
/Users/satishd/repos/apache-kafka/core/src/test/scala/unit/kafka/server/ServerGenerateClusterIdTest.scala:29:21:
 imported `QuorumTestHarness` is permanently hidden by definition of object 
QuorumTestHarness in package server
   [Error] 
/Users/satishd/repos/apache-kafka/core/src/test/scala/unit/kafka/utils/TestUtils.scala:1438:15:
 ambiguous reference to overloaded definition,
   both method doReturn in class Mockito of type (x$1: Any, x$2: 
Object*)org.mockito.stubbing.Stubber
   and  method doReturn in class Mockito of type (x$1: 
Any)org.mockito.stubbing.Stubber
   match argument types (kafka.log.UnifiedLog)
   ```
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (KAFKA-14780) Make RefreshingHttpsJwksTest#testSecondaryRefreshAfterElapsedDelay deterministic

2023-07-20 Thread Fei Xie (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745366#comment-17745366
 ] 

Fei Xie edited comment on KAFKA-14780 at 7/21/23 4:04 AM:
--

Hi [~adupriez], is anyone actively working on this ticket? If not, could you 
assign this ticket to me, please? Thx!


was (Author: JIRAUSER301434):
Hello there, is anyone actively working on this ticket? If not, could you 
assign this ticket to me please? Thx!

> Make RefreshingHttpsJwksTest#testSecondaryRefreshAfterElapsedDelay 
> deterministic
> 
>
> Key: KAFKA-14780
> URL: https://issues.apache.org/jira/browse/KAFKA-14780
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Alexandre Dupriez
>Assignee: Alexandre Dupriez
>Priority: Minor
>
> The test {{RefreshingHttpsJwksTest#testSecondaryRefreshAfterElapsedDelay}}
> relies on the actual system clock, which makes it frequently fail on my poor
> IntelliJ setup.
>  
> The {{RefreshingHttpsJwks}} component creates and uses a scheduled
> executor service. We could expose the scheduling mechanism to be able to mock
> its behaviour. One way to do this could be to use the {{KafkaScheduler}}, which
> has a {{MockScheduler}} implementation that relies on {{MockTime}} instead of
> the real-time clock.
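The mocking approach suggested in the ticket — driving the scheduler from a manually advanced clock rather than the system clock — can be sketched as follows. This is a minimal, self-contained Python analogy; the names `MockTime` and `MockScheduler` mirror Kafka's test utilities in spirit but are not their actual API:

```python
class MockTime:
    """A manually advanced clock (analogy to Kafka's MockTime, not its real API)."""
    def __init__(self, start_ms=0):
        self._now_ms = start_ms

    def milliseconds(self):
        return self._now_ms

    def sleep(self, ms):
        # Advances the clock instead of blocking.
        self._now_ms += ms


class MockScheduler:
    """Runs scheduled tasks once the mock clock passes their deadline."""
    def __init__(self, time):
        self._time = time
        self._tasks = []  # list of (deadline_ms, callback)

    def schedule(self, delay_ms, callback):
        self._tasks.append((self._time.milliseconds() + delay_ms, callback))

    def tick(self):
        now = self._time.milliseconds()
        due = [t for t in self._tasks if t[0] <= now]
        self._tasks = [t for t in self._tasks if t[0] > now]
        for _, callback in due:
            callback()


fired = []
time = MockTime()
scheduler = MockScheduler(time)
scheduler.schedule(5000, lambda: fired.append("secondary-refresh"))
scheduler.tick()   # clock at 0 ms: nothing is due yet
time.sleep(5000)   # advance the mock clock past the deadline
scheduler.tick()   # the refresh now fires deterministically
```

Because the test advances the clock itself, the scheduled refresh fires at an exact, reproducible point, independent of how slow the machine running the test is.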



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14780) Make RefreshingHttpsJwksTest#testSecondaryRefreshAfterElapsedDelay deterministic

2023-07-20 Thread Fei Xie (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745366#comment-17745366
 ] 

Fei Xie commented on KAFKA-14780:
-

Hello there, is anyone actively working on this ticket? If not, could you 
assign this ticket to me please? Thx!

> Make RefreshingHttpsJwksTest#testSecondaryRefreshAfterElapsedDelay 
> deterministic
> 
>
> Key: KAFKA-14780
> URL: https://issues.apache.org/jira/browse/KAFKA-14780
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Alexandre Dupriez
>Assignee: Alexandre Dupriez
>Priority: Minor
>
> The test {{RefreshingHttpsJwksTest#testSecondaryRefreshAfterElapsedDelay}}
> relies on the actual system clock, which makes it frequently fail on my poor
> IntelliJ setup.
>  
> The {{RefreshingHttpsJwks}} component creates and uses a scheduled
> executor service. We could expose the scheduling mechanism to be able to mock
> its behaviour. One way to do this could be to use the {{KafkaScheduler}}, which
> has a {{MockScheduler}} implementation that relies on {{MockTime}} instead of
> the real-time clock.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] github-actions[bot] commented on pull request #13619: Initial support for OpenJDK CRaC snapshotting

2023-07-20 Thread via GitHub


github-actions[bot] commented on PR #13619:
URL: https://github.com/apache/kafka/pull/13619#issuecomment-1644936041

   This PR is being marked as stale since it has not had any activity in 90 
days. If you would like to keep this PR alive, please ask a committer for 
review. If the PR has merge conflicts, please update it with the latest from 
trunk (or the appropriate release branch). If this PR is no longer valid or 
desired, please feel free to close it. If no activity occurs in the next 30 
days, it will be automatically closed.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] showuon commented on pull request #14045: MINOR: refactor(storage): topic-based RLMM consumer-manager/task related improvements

2023-07-20 Thread via GitHub


showuon commented on PR #14045:
URL: https://github.com/apache/kafka/pull/14045#issuecomment-1644902541

   Rerunning CI build: 
https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14045/6/


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (KAFKA-15176) Add missing tests for remote storage metrics

2023-07-20 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen resolved KAFKA-15176.
---
Fix Version/s: 3.6.0
   Resolution: Fixed

> Add missing tests for remote storage metrics
> 
>
> Key: KAFKA-15176
> URL: https://issues.apache.org/jira/browse/KAFKA-15176
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Luke Chen
>Assignee: Luke Chen
>Priority: Major
> Fix For: 3.6.0
>
>
> {{RemoteLogReaderTaskQueueSize}}
> {{RemoteLogReaderAvgIdlePercent}}
> {{RemoteLogManagerTasksAvgIdlePercent}}
> https://github.com/apache/kafka/pull/13944#pullrequestreview-1513943273



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] showuon merged pull request #13999: KAFKA-15176: add tests for tiered storage metrics

2023-07-20 Thread via GitHub


showuon merged PR #13999:
URL: https://github.com/apache/kafka/pull/13999


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] stevenbooke commented on a diff in pull request #13842: KAFKA-14995: Automate asf.yaml collaborators refresh

2023-07-20 Thread via GitHub


stevenbooke commented on code in PR #13842:
URL: https://github.com/apache/kafka/pull/13842#discussion_r1270163490


##
refresh-collaborators.py:
##
@@ -59,9 +64,20 @@
 yaml_content["github"]["collaborators"] = collaborators
 
 # Convert the updated content back to YAML
-updated_yaml = yaml.safe_dump(yaml_content)
+updated_yaml = io.StringIO()
+yml.dump(yaml_content, updated_yaml)
+updated_yaml_str = updated_yaml.getvalue()
 
-# Commit and push the changes
+# Create a new branch for the changes
+new_branch_name = "update-asf.yaml-github-whitelist-and-collaborators"

Review Comment:
   I'll look into this more in the coming days and post an update.
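The diff above works around the fact that, unlike `yaml.safe_dump`, ruamel.yaml's `YAML.dump(data, stream)` writes to a stream instead of returning a string. The capture-via-`StringIO` pattern can be sketched with a stdlib stand-in (`StreamDumper` below is a hypothetical placeholder for `ruamel.yaml.YAML`, used only so the sketch has no third-party dependency):

```python
import io

class StreamDumper:
    """Hypothetical stand-in for ruamel.yaml's YAML, whose dump(data, stream)
    writes to a stream rather than returning a string."""
    def dump(self, data, stream):
        for key, value in data.items():
            stream.write(f"{key}: {value}\n")

yml = StreamDumper()
buf = io.StringIO()
yml.dump({"collaborators": "[alice, bob]"}, buf)
updated_yaml_str = buf.getvalue()  # string form needed for the commit payload
```

The round trip through `io.StringIO` is what lets a stream-only dumper feed an API (such as a Git commit call) that expects a plain string.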



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (KAFKA-12946) __consumer_offsets topic with very big partitions

2023-07-20 Thread zhangzhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17611018#comment-17611018
 ] 

zhangzhisheng edited comment on KAFKA-12946 at 7/21/23 2:15 AM:


upgrade version >=2.4.2


was (Author: zhangzs):
upgrade 2.4.2

> __consumer_offsets topic with very big partitions
> -
>
> Key: KAFKA-12946
> URL: https://issues.apache.org/jira/browse/KAFKA-12946
> Project: Kafka
>  Issue Type: Bug
>  Components: log cleaner
>Affects Versions: 2.0.0
>Reporter: Emi
>Priority: Critical
>
> I am using Kafka 2.0.0 with Java 8u191.
>  There is a partition of the __consumer_offsets topic that is 600 GB with 
> 6000 segments older than 4 months. Other partitions of that topic are small: 
> 20-30 MB.
> There are 60 consumer groups, 90 topics and 100 partitions per topic.
> There aren't errors in the logs. From the log cleaner's log, I can see 
> that this partition is never touched by the log cleaner thread for 
> compaction; it only adds new segments.
>  How is this possible?
> There was another partition with the same problem, but after some months it 
> was compacted. Now there is only one partition with this problem, but 
> it is bigger and keeps growing.
> I have used the kafka-dump-log tool to check these old segments and I can see 
> many duplicates, so I would assume that it is not compacted.
> My settings:
>  {{offsets.commit.required.acks = -1}}
>  {{offsets.commit.timeout.ms}} = 5000
>  {{offsets.load.buffer.size = 5242880}}
>  {{offsets.retention.check.interval.ms}} = 60
>  {{offsets.retention.minutes = 10080}}
>  {{offsets.topic.compression.codec = 0}}
>  {{offsets.topic.num.partitions = 50}}
>  {{offsets.topic.replication.factor = 3}}
>  {{offsets.topic.segment.bytes = 104857600}}
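For context on the cleaner behaviour the report describes: log compaction retains only the latest record per key, so a healthy compacted __consumer_offsets partition should shrink rather than accumulate duplicates. A toy sketch of the idea (not Kafka's actual cleaner, which works segment by segment using an offset map):

```python
def compact(records):
    """Keep only the latest record per key, preserving offset order.

    records: iterable of (offset, key, value) tuples, in offset order.
    """
    latest = {}
    for offset, key, value in records:
        latest[key] = (offset, value)  # later records overwrite earlier ones
    # The compacted log is the surviving records, still sorted by offset.
    return sorted(latest.values())

# Duplicate commits for group "g1": only the newest one survives compaction.
records = [(0, "g1", "off=5"), (1, "g2", "off=3"), (2, "g1", "off=9")]
compacted = compact(records)
```

Seeing many records with the same key in kafka-dump-log output, as the reporter did, is therefore direct evidence that the cleaner never compacted those segments.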



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] jeffkbkim commented on a diff in pull request #14047: KAFKA-14499: [2/N] Add OffsetCommit record & related

2023-07-20 Thread via GitHub


jeffkbkim commented on code in PR #14047:
URL: https://github.com/apache/kafka/pull/14047#discussion_r1270140356


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/OffsetAndMetadata.java:
##
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.coordinator.group;
+
+import org.apache.kafka.common.record.RecordBatch;
+import org.apache.kafka.common.requests.OffsetCommitRequest;
+import org.apache.kafka.coordinator.group.generated.OffsetCommitValue;
+
+import java.util.Objects;
+import java.util.OptionalInt;
+import java.util.OptionalLong;
+
+/**
+ * Represents a committed offset with its metadata.

Review Comment:
   nit: i think the consumer package's OffsetAndMetadata has a good comment wrt 
the metadata:
   ```
   /**
* The Kafka offset commit API allows users to provide additional metadata 
(in the form of a string)
* when an offset is committed. This can be useful (for example) to store 
information about which
* node made the commit, what time the commit was made, etc.
*/
   ```
   
   can we include a bit of description on what the metadata is?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] jeffkbkim commented on a diff in pull request #14046: KAFKA-14499: [1/N] Introduce OffsetCommit API version 9 and add new StaleMemberEpochException error

2023-07-20 Thread via GitHub


jeffkbkim commented on code in PR #14046:
URL: https://github.com/apache/kafka/pull/14046#discussion_r1270133576


##
clients/src/main/resources/common/message/OffsetCommitRequest.json:
##
@@ -31,13 +31,19 @@
   // version 7 adds a new field called groupInstanceId to indicate member 
identity across restarts.
   //
   // Version 8 is the first flexible version.
-  "validVersions": "0-8",
+  //
+  // Version 9 is the first version that can be used with the new consumer 
group protocol (KIP-848). The
+  // request is the same as version 8.
+  // Version 9 is added as part of KIP-848 and is still under development. 
Hence, the last version of the
+  // API is not exposed by default by brokers unless explicitly enabled.
+  "latestVersionUnstable": true,
+  "validVersions": "0-9",
   "flexibleVersions": "8+",
   "fields": [
 { "name": "GroupId", "type": "string", "versions": "0+", "entityType": 
"groupId",
   "about": "The unique group identifier." },
-{ "name": "GenerationId", "type": "int32", "versions": "1+", "default": 
"-1", "ignorable": true,
-  "about": "The generation of the group." },
+{ "name": "GenerationIdOrMemberEpoch", "type": "int32", "versions": "1+", 
"default": "-1", "ignorable": true,
+  "about": "The generation of the group if the generic group protocol or 
the member epoch if the consumer protocol." },

Review Comment:
   3 comments:
   * is it safe to change the field name?
   * > It based on the type of the group
   
   you're saying based on the consumer since we can have a group with both new 
& old consumers right?
   * nit: "if using ... if using ..."



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] stevenbooke commented on a diff in pull request #13842: KAFKA-14995: Automate asf.yaml collaborators refresh

2023-07-20 Thread via GitHub


stevenbooke commented on code in PR #13842:
URL: https://github.com/apache/kafka/pull/13842#discussion_r1270124376


##
refresh-collaborators.py:
##
@@ -35,7 +36,10 @@
 contributors_login_to_commit_volume = {}
 end_date = datetime.now()
 start_date = end_date - timedelta(days=365)
+repo = g.get_repo("apache/kafka")
 for commit in repo.get_commits(since=start_date, until=end_date):
+if commit.author is None and commit.author.login is None:

Review Comment:
   Correct, will change.
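The guard in the diff above is the subject of this thread: with `and`, a commit whose `author` is `None` would still evaluate `commit.author.login` and raise `AttributeError`, and commits lacking a login would not be skipped. A short-circuiting `or` behaves as intended (sketched with stand-in objects, not the PyGithub API):

```python
from types import SimpleNamespace

commits = [
    SimpleNamespace(author=None),                            # no resolvable author
    SimpleNamespace(author=SimpleNamespace(login=None)),     # author without a login
    SimpleNamespace(author=SimpleNamespace(login="alice")),  # normal commit
]

counted = []
for commit in commits:
    # `or` short-circuits: commit.author.login is never evaluated when author is None
    if commit.author is None or commit.author.login is None:
        continue
    counted.append(commit.author.login)
```

Only commits with a resolvable GitHub login are counted toward the collaborator commit volume.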



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] stevenbooke commented on a diff in pull request #13842: KAFKA-14995: Automate asf.yaml collaborators refresh

2023-07-20 Thread via GitHub


stevenbooke commented on code in PR #13842:
URL: https://github.com/apache/kafka/pull/13842#discussion_r1270123123


##
refresh-collaborators.py:
##
@@ -0,0 +1,83 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import os
+import io
+from bs4 import BeautifulSoup
+from github import Github
+from ruamel.yaml import YAML
+from datetime import datetime, timedelta
+
+### GET THE NAMES OF THE KAFKA COMMITTERS FROM THE apache/kafka-site REPO ###
+github_token = os.environ.get('GITHUB_TOKEN')
+g = Github(github_token)
+repo = g.get_repo("apache/kafka-site")

Review Comment:
   We would not be able to retrieve the organization from the environment for 
"apache/kafka-site" due to the fact that "At the start of each workflow run, 
GitHub automatically creates a unique GITHUB_TOKEN secret to use in your 
workflow. You can use the GITHUB_TOKEN to authenticate in a workflow run.
   
   When you enable GitHub Actions, GitHub installs a GitHub App on your 
repository. The GITHUB_TOKEN secret is a GitHub App installation access token. 
You can use the installation access token to authenticate on behalf of the 
GitHub App installed on your repository. The token's permissions are limited to 
the repository that contains your workflow." See reference 
[here](https://docs.github.com/en/actions/security-guides/automatic-token-authentication#about-the-github_token-secret).
   
We would only be able to retrieve the organization from the environment for 
"apache/kafka".



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] jolshan commented on a diff in pull request #13654: HOTFIX: fix broken Streams upgrade system test

2023-07-20 Thread via GitHub


jolshan commented on code in PR #13654:
URL: https://github.com/apache/kafka/pull/13654#discussion_r1270114193


##
tests/kafkatest/version.py:
##
@@ -249,7 +250,3 @@ def get_version(node=None):
 # 3.5.x versions
 V_3_5_0 = KafkaVersion("3.5.0")
 LATEST_3_5 = V_3_5_0
-
-# 3.6.x versions
-V_3_6_0 = KafkaVersion("3.6.0")

Review Comment:
   I think the issue in David's commit was that he didn't set the dev version 
to match the new version defined?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] jolshan commented on a diff in pull request #13654: HOTFIX: fix broken Streams upgrade system test

2023-07-20 Thread via GitHub


jolshan commented on code in PR #13654:
URL: https://github.com/apache/kafka/pull/13654#discussion_r1270112505


##
tests/kafkatest/version.py:
##
@@ -249,7 +250,3 @@ def get_version(node=None):
 # 3.5.x versions
 V_3_5_0 = KafkaVersion("3.5.0")
 LATEST_3_5 = V_3_5_0
-
-# 3.6.x versions
-V_3_6_0 = KafkaVersion("3.6.0")

Review Comment:
   Hmm -- if you look at all the snapshot commits, we've added this cutting the 
release branch for the previous version. (Ie, cutting the branch for the new 
trunk as per the commit comments) Seems like the top commit I linked should not 
have been reverted unless all of these were wrong.
   
https://github.com/apache/kafka/commit/dc1ede8d89d0964783302e0da9ead7fa1d76fbe4
   
https://github.com/apache/kafka/commit/c1a54671e8fc6c7daec5f5ec3d8c934be96b4989
   
https://github.com/apache/kafka/commit/6ace67b2de00f4f1665e7c9e3484e01b6d6f9584



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] mjsax commented on a diff in pull request #13654: HOTFIX: fix broken Streams upgrade system test

2023-07-20 Thread via GitHub


mjsax commented on code in PR #13654:
URL: https://github.com/apache/kafka/pull/13654#discussion_r1270108718


##
tests/kafkatest/version.py:
##
@@ -249,7 +250,3 @@ def get_version(node=None):
 # 3.5.x versions
 V_3_5_0 = KafkaVersion("3.5.0")
 LATEST_3_5 = V_3_5_0
-
-# 3.6.x versions
-V_3_6_0 = KafkaVersion("3.6.0")

Review Comment:
   Yes -- 3.6.0 is not released yet. We should only need it after 3.6.0 is out.
   
   If you want to refer to current version it would be `DEV_VERSION` (ie, 
3.6.0-SNAPSHOT).
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] mjsax commented on a diff in pull request #14030: KAFKA-15022: [3/N] use graph to compute rack aware assignment for active stateful tasks

2023-07-20 Thread via GitHub


mjsax commented on code in PR #14030:
URL: https://github.com/apache/kafka/pull/14030#discussion_r1270094068


##
streams/src/main/java/org/apache/kafka/streams/processor/internals/assignment/RackAwareTaskAssignor.java:
##
@@ -38,29 +43,34 @@
 
 public class RackAwareTaskAssignor {
 private static final Logger log = 
LoggerFactory.getLogger(RackAwareTaskAssignor.class);
+private static final int DEFAULT_STATEFUL_TRAFFIC_COST = 10;
+private static final int DEFAULT_STATEFUL_NON_OVERLAP_COST = 1;

Review Comment:
   Why is this not zero (as it is for stateless)?



##
streams/src/main/java/org/apache/kafka/streams/processor/internals/assignment/RackAwareTaskAssignor.java:
##
@@ -38,29 +43,34 @@
 
 public class RackAwareTaskAssignor {
 private static final Logger log = 
LoggerFactory.getLogger(RackAwareTaskAssignor.class);
+private static final int DEFAULT_STATEFUL_TRAFFIC_COST = 10;

Review Comment:
   Why is this set to `10`? In particular, why is it higher than for the 
stateless case? In the end, my understanding was that we try to optimize for 
input partitions, and for that it makes no difference whether a task has state 
or not; only the number of input topic partitions for a task matters (each 
with equal cost).



##
streams/src/main/java/org/apache/kafka/streams/processor/internals/assignment/AssignorConfiguration.java:
##
@@ -268,24 +268,44 @@ public static class AssignmentConfigs {
 public final long probingRebalanceIntervalMs;
 public final List rackAwareAssignmentTags;
 
+// TODO: get from streamsConfig after we add the config

Review Comment:
   I cannot remember such parameters being defined in the KIP. Can you 
elaborate?



##
streams/src/main/java/org/apache/kafka/streams/processor/internals/assignment/RackAwareTaskAssignor.java:
##
@@ -185,4 +191,224 @@ public boolean validateClientRack() {
 }
 return true;
 }
+
+private int getCost(final TaskId taskId, final UUID clientId, final 
boolean inCurrentAssignment, final boolean isStateful) {
+final Map> clientRacks = 
racksForProcess.get(clientId);
+if (clientRacks == null) {
+throw new IllegalStateException("Client " + clientId + " doesn't 
exist in processRacks");
+}
+final Optional> clientRackOpt = 
clientRacks.values().stream().filter(Optional::isPresent).findFirst();
+if (!clientRackOpt.isPresent() || !clientRackOpt.get().isPresent()) {
+throw new IllegalStateException("Client " + clientId + " doesn't 
have rack configured. Maybe forgot to call canEnableRackAwareAssignor first");
+}
+
+final String clientRack = clientRackOpt.get().get();
+final Set topicPartitions = 
partitionsForTask.get(taskId);
+if (topicPartitions == null) {
+throw new IllegalStateException("Task " + taskId + " has no 
TopicPartitions");
+}
+
+final int trafficCost = assignmentConfigs.trafficCost == null ? 
(isStateful ? DEFAULT_STATEFUL_TRAFFIC_COST : DEFAULT_STATELESS_TRAFFIC_COST)
+: assignmentConfigs.trafficCost;
+final int nonOverlapCost = assignmentConfigs.nonOverlapCost == null ? 
(isStateful ? DEFAULT_STATEFUL_NON_OVERLAP_COST : 
DEFAULT_STATELESS_NON_OVERLAP_COST)
+: assignmentConfigs.nonOverlapCost;
+
+int cost = 0;
+for (final TopicPartition tp : topicPartitions) {
+final Set tpRacks = racksForPartition.get(tp);
+if (tpRacks == null || tpRacks.isEmpty()) {
+throw new IllegalStateException("TopicPartition " + tp + " has 
no rack information. Maybe forgot to call canEnableRackAwareAssignor first");
+}
+if (!tpRacks.contains(clientRack)) {
+cost += trafficCost;
+}
+}
+
+if (!inCurrentAssignment) {
+cost += nonOverlapCost;
+}
+
+return cost;
+}
+
+// For testing. canEnableRackAwareAssignor must be called first
+long activeTasksCost(final SortedMap clientStates, 
final SortedSet statefulTasks, final boolean isStateful) {
+final List clientList = new ArrayList<>(clientStates.keySet());
+final List taskIdList = new ArrayList<>(statefulTasks);
+final Map taskClientMap = new HashMap<>();
+final Map clientCapacity = new HashMap<>();
+final Graph graph = new Graph<>();
+
+constructStatefulActiveTaskGraph(graph, statefulTasks, clientList, 
taskIdList,
+clientStates, taskClientMap, clientCapacity, isStateful);
+
+final int sourceId = taskIdList.size() + clientList.size();
+final int sinkId = sourceId + 1;
+for (int taskNodeId = 0; taskNodeId < taskIdList.size(); taskNodeId++) 
{
+graph.addEdge(sourceId, taskNodeId, 1, 0, 1);
+}
+for (int i = 0; i < clientList.size(); i++) {
+final int capacity = 

[GitHub] [kafka] mjsax commented on a diff in pull request #13996: KAFKA-15022: [2/N] introduce graph to compute min cost

2023-07-20 Thread via GitHub


mjsax commented on code in PR #13996:
URL: https://github.com/apache/kafka/pull/13996#discussion_r1265999671


##
streams/src/main/java/org/apache/kafka/streams/processor/internals/assignment/Graph.java:
##
@@ -0,0 +1,367 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.streams.processor.internals.assignment;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Objects;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.SortedSet;
+import java.util.TreeMap;
+import java.util.TreeSet;
+import java.util.stream.Collectors;
+
+public class Graph> {
+public class Edge {
+final V destination;
+final int capacity;
+final int cost;
+int residualFlow;
+int flow;
+Edge counterEdge;
+boolean forwardEdge;
+
+public Edge(final V destination, final int capacity, final int cost, 
final int residualFlow, final int flow) {
+this(destination, capacity, cost, residualFlow, flow, true);
+}
+
+public Edge(final V destination, final int capacity, final int cost, 
final int residualFlow, final int flow,
+final boolean forwardEdge) {

Review Comment:
   nit: formatting (if it does not fit in one line, we should move each 
parameter onto its own line to simplify reading)
   ```
   public Edge(
   final V destination,
   final int capacity,
   final int cost,
   final int residualFlow,
   final int flow,
   final boolean forwardEdge
   ) {
   ```



##
streams/src/test/java/org/apache/kafka/streams/processor/internals/assignment/GraphTest.java:
##
@@ -0,0 +1,414 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.streams.processor.internals.assignment;
+
+import static org.hamcrest.MatcherAssert.assertThat;
+import static org.hamcrest.Matchers.contains;
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertFalse;
+import static org.junit.jupiter.api.Assertions.assertNull;
+import static org.junit.jupiter.api.Assertions.assertSame;
+import static org.junit.jupiter.api.Assertions.assertThrows;
+import static org.junit.jupiter.api.Assertions.assertTrue;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.junit.Before;
+import org.junit.Test;
+
+public class GraphTest {
+private Graph<Integer> graph;
+
+@Before
+public void setUp() {
+/*
+ * Node 0 and 2 are both connected to node 1 and 3. There's a flow of 1 unit from 0 to 1 and 2 to
+ * 3. The total cost in this case is 5. Min cost should be 2 by flowing 1 unit from 0 to 3 and 2
+ * to 1
+ */
+graph = new Graph<>();
+graph.addEdge(0, 1, 1, 3, 1);
+graph.addEdge(0, 3, 1, 1, 0);
+graph.addEdge(2, 1, 1, 1, 0);
+graph.addEdge(2, 3, 1, 2, 1);
+graph.addEdge(4, 0, 1, 0, 1);
+graph.addEdge(4, 2, 1, 0, 1);
+graph.addEdge(1, 5, 1, 0, 1);
+graph.addEdge(3, 5, 1, 0, 1);
+graph.setSourceNode(4);
+graph.setSinkNode(5);
+}
+
+@Test
+public void testBasic() {
+final Set<Integer> nodes = graph.nodes();
+assertEquals(6, nodes.size());
+assertThat(nodes, contains(0, 1, 2, 3, 4, 5));
+
+Map<Integer, Graph<Integer>.Edge> edges 

[GitHub] [kafka] philipnee commented on pull request #13797: KAFKA-14950: implement assign() and assignment()

2023-07-20 Thread via GitHub


philipnee commented on PR #13797:
URL: https://github.com/apache/kafka/pull/13797#issuecomment-1644739179

   Thanks, @junrao - the failing tests should be fixed in the latest commit. The 
integration test failed because of a missing subscription state dependency, which 
was added in one of the subsequent PRs.
   
   The remaining failing tests are unrelated to this change; they are:
   ```
   Build / JDK 8 and Scala 2.12 / testOffsetTranslationBehindReplicationFlow() 
– 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsWithCustomForwardingAdminIntegrationTest
   44s
   Build / JDK 11 and Scala 2.13 / testBalancePartitionLeaders() – 
org.apache.kafka.controller.QuorumControllerTest
   12s
   Build / JDK 20 and Scala 2.13 / [1] tlsProtocol=TLSv1.2, useInlinePem=false 
– org.apache.kafka.common.network.SslTransportLayerTest
   14s
   Build / JDK 20 and Scala 2.13 / testSyncTopicConfigs() – 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsWithCustomForwardingAdminIntegrationTest
   1m 56s
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-15217) Consider usage of Gradle toolchain to specify Java version

2023-07-20 Thread Said BOUDJELDA (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745290#comment-17745290
 ] 

Said BOUDJELDA commented on KAFKA-15217:


I feel I can take this Jira if it's possible 

> Consider usage of Gradle toolchain to specify Java version
> --
>
> Key: KAFKA-15217
> URL: https://issues.apache.org/jira/browse/KAFKA-15217
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Divij Vaidya
>Priority: Minor
>
> We recently started using Gradle 8.2. It has a new feature called toolchains 
> [1] which could be used in our project. This task is to explore that feature 
> and consider its usage for Kafka.
> [1] https://docs.gradle.org/8.2/userguide/toolchains.html 
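
For readers unfamiliar with the feature, a toolchain declaration in a Gradle build (Kotlin DSL) looks roughly like the following sketch. This is illustrative only; the Java version shown is an arbitrary example, not a proposal for what Kafka should use:

```kotlin
// build.gradle.kts -- illustrative sketch, not Kafka's actual build file.
// Gradle resolves (and can auto-download) a JDK matching the declared version,
// decoupling the build's Java version from whatever JVM runs Gradle itself.
java {
    toolchain {
        languageVersion.set(JavaLanguageVersion.of(11))
    }
}
```

With a toolchain declared, compile, test, and javadoc tasks all run against the resolved JDK rather than the Gradle daemon's JVM.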



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-15217) Consider usage of Gradle toolchain to specify Java version

2023-07-20 Thread Said BOUDJELDA (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745290#comment-17745290
 ] 

Said BOUDJELDA edited comment on KAFKA-15217 at 7/20/23 10:22 PM:
--

[~divijvaidya]  I feel I can take this Jira if it's possible 


was (Author: JIRAUSER301378):
I feel I can take this Jira if it's possible 

> Consider usage of Gradle toolchain to specify Java version
> --
>
> Key: KAFKA-15217
> URL: https://issues.apache.org/jira/browse/KAFKA-15217
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Divij Vaidya
>Priority: Minor
>
> We recently started using Gradle 8.2. It has a new feature called toolchains 
> [1] which could be used in our project. This task is to explore that feature 
> and consider its usage for Kafka.
> [1] https://docs.gradle.org/8.2/userguide/toolchains.html 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] ahuang98 commented on a diff in pull request #13654: HOTFIX: fix broken Streams upgrade system test

2023-07-20 Thread via GitHub


ahuang98 commented on code in PR #13654:
URL: https://github.com/apache/kafka/pull/13654#discussion_r1270012540


##
tests/kafkatest/version.py:
##
@@ -249,7 +250,3 @@ def get_version(node=None):
 # 3.5.x versions
 V_3_5_0 = KafkaVersion("3.5.0")
 LATEST_3_5 = V_3_5_0
-
-# 3.6.x versions
-V_3_6_0 = KafkaVersion("3.6.0")

Review Comment:
   Guessing it was removed because there were no references to it; I'll add it 
back when I raise a PR for more comprehensive upgrade tests.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] jolshan commented on a diff in pull request #13654: HOTFIX: fix broken Streams upgrade system test

2023-07-20 Thread via GitHub


jolshan commented on code in PR #13654:
URL: https://github.com/apache/kafka/pull/13654#discussion_r1270009709


##
tests/kafkatest/version.py:
##
@@ -249,7 +250,3 @@ def get_version(node=None):
 # 3.5.x versions
 V_3_5_0 = KafkaVersion("3.5.0")
 LATEST_3_5 = V_3_5_0
-
-# 3.6.x versions
-V_3_6_0 = KafkaVersion("3.6.0")

Review Comment:
   Here -- did we mean to remove?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] rreddy-22 commented on a diff in pull request #13998: KAFKA-14702: Extend server side assignor to support rack aware replica placement

2023-07-20 Thread via GitHub


rreddy-22 commented on code in PR #13998:
URL: https://github.com/apache/kafka/pull/13998#discussion_r1270007353


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/consumer/TopicMetadata.java:
##
@@ -40,23 +44,31 @@ public class TopicMetadata {
  */
 private final int numPartitions;
 
+/**
+ * Map of every partition to a set of its rackIds.
+ * If the rack information is unavailable, pass an empty set.
+ */
+private final Map<Integer, Set<String>> partitionRackInfo;

Review Comment:
   I also thought about it, but realized it's not really necessary, since 
PartitionMetadata is used to pass info to the assignor from the 
TargetAssignmentBuilder, while TopicMetadata is used to get info from records up 
to the TargetAssignmentBuilder. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] rreddy-22 commented on a diff in pull request #13998: KAFKA-14702: Extend server side assignor to support rack aware replica placement

2023-07-20 Thread via GitHub


rreddy-22 commented on code in PR #13998:
URL: https://github.com/apache/kafka/pull/13998#discussion_r1270002911


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/consumer/ConsumerGroup.java:
##
@@ -434,22 +437,36 @@ public void setSubscriptionMetadata(
 public Map<String, TopicMetadata> computeSubscriptionMetadata(
 ConsumerGroupMember oldMember,
 ConsumerGroupMember newMember,
-TopicsImage topicsImage
+TopicsImage topicsImage,
+ClusterImage clusterImage

Review Comment:
   Since metadataImage has a lot more images that we don't need, and passing the 
specific arguments makes it more readable, maintainable, and testable, I feel 
like we can keep it as two separate arguments.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] gharris1727 commented on pull request #14064: KAFKA-15030: Add connect-plugin-path command-line tool.

2023-07-20 Thread via GitHub


gharris1727 commented on PR #14064:
URL: https://github.com/apache/kafka/pull/14064#issuecomment-1644636341

   Here's some sample output from the current implementation:
   ```
   $ ./bin/connect-plugin-path.sh list --plugin-path ~/test/plugin-path/
   SLF4J: Class path contains multiple SLF4J bindings.
   SLF4J: Found binding in 
[jar:file:/Users/greg.harris/github/kafka/tools/build/dependant-libs-2.13.11/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
   SLF4J: Found binding in 
[jar:file:/Users/greg.harris/github/kafka/trogdor/build/dependant-libs-2.13.11/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
   SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.
   SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
   /Users/greg.harris/test/plugin-path/a2solutions-oracdc-kafka-1.3.3.1	solutions.a2.cdc.oracle.OraCdcJdbcSinkConnector	OraCdcJdbcSink	OraCdcJdbcSinkConnector	1.3.3.1	sink	true	false
   /Users/greg.harris/test/plugin-path/a2solutions-oracdc-kafka-1.3.3.1	solutions.a2.cdc.oracle.OraCdcLogMinerConnector	OraCdcLogMinerConnector	OraCdcLogMiner	1.3.3.1	source	true	false
   /Users/greg.harris/test/plugin-path/a2solutions-oracdc-kafka-1.3.3.1	solutions.a2.cdc.oracle.OraCdcSourceConnector	OraCdcSource	OraCdcSourceConnector	1.3.3.1	source	true	false
   /Users/greg.harris/test/plugin-path/microsoft-kafka-connect-iothub-0.6	com.microsoft.azure.iot.kafka.connect.sink.IotHubSinkConnector	IotHubSink	IotHubSinkConnector	0.6	sink	true	false
   /Users/greg.harris/test/plugin-path/microsoft-kafka-connect-iothub-0.6	com.microsoft.azure.iot.kafka.connect.IotHubSourceConnector	IotHubSourceConnector	IotHubSource	0.6	source	true	false
   [2023-07-20 14:19:13,855] ERROR Failed to get plugin version for class 
com.github.jcustenborder.kafka.connect.spooldir.elf.SpoolDirELFSourceConnector 
(org.apache.kafka.connect.runtime.isolation.PluginScanner)
   java.lang.NoClassDefFoundError: com/google/common/base/Strings
   at 
com.github.jcustenborder.kafka.connect.utils.VersionUtil.version(VersionUtil.java:32)
   at 
com.github.jcustenborder.kafka.connect.spooldir.elf.SpoolDirELFSourceConnector.version(SpoolDirELFSourceConnector.java:57)
   at 
org.apache.kafka.connect.runtime.isolation.PluginScanner.versionFor(PluginScanner.java:199)
   at 
org.apache.kafka.connect.runtime.isolation.ReflectionScanner.versionFor(ReflectionScanner.java:74)
   at 
org.apache.kafka.connect.runtime.isolation.ReflectionScanner.getPluginDesc(ReflectionScanner.java:135)
   at 
org.apache.kafka.connect.runtime.isolation.ReflectionScanner.scanPlugins(ReflectionScanner.java:88)
   at 
org.apache.kafka.connect.runtime.isolation.PluginScanner.scanUrlsAndAddPlugins(PluginScanner.java:79)
   at 
org.apache.kafka.connect.runtime.isolation.PluginScanner.discoverPlugins(PluginScanner.java:67)
   at 
org.apache.kafka.tools.ConnectPluginPath.runCommand(ConnectPluginPath.java:213)
   at 
org.apache.kafka.tools.ConnectPluginPath.mainNoExit(ConnectPluginPath.java:79)
   at 
org.apache.kafka.tools.ConnectPluginPath.main(ConnectPluginPath.java:71)
   Caused by: java.lang.ClassNotFoundException: com.google.common.base.Strings
   at 
java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:445)
   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:587)
   at 
org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:136)
   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:520)
   ... 11 more
   ```
   
   The SLF4J lines and the ERROR logs are printed to stderr, so redirecting 
with `2> /dev/null` is sufficient to clean up the output. Unloadable plugins 
(due to class structure problems) are printed with error logs.
   
   Right now the stdout is formatted like a TSV, but without a header. The KIP 
only specifies that this output be human-readable, so I'm interested in 
changing this output slightly before merging. The columns are:
   
   * plugin location
   * full class name
   * simple name (if unique within the classpath+location), or null
   * pruned name (if unique within the classpath+location), or null
   * reported version, or undefined
   * plugin type (sink/source/converter/etc)
   * whether the plugin is reflectively loadable
   * whether the plugin has a manifest file
   
   Because the ReflectionScanner is used on individual locations one-at-a-time, 
the script is slow, but prints results after scanning each location. For the 
above, the results start to be printed within about 1/4 second, and the whole 
listing takes 30s for 150 components (400 plugins) downloaded from the internet.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to 

[GitHub] [kafka] gharris1727 opened a new pull request, #14064: KAFKA-15030: Add connect-plugin-path command-line tool.

2023-07-20 Thread via GitHub


gharris1727 opened a new pull request, #14064:
URL: https://github.com/apache/kafka/pull/14064

   This adds only the `list` subcommand, the `sync-manifests` subcommand will 
be in a follow-up PR.
   
   This includes new dependencies to the tools package on connect-runtime and 
connect-runtime-test in order to test the functionality of this command. New 
entry points for linux (.sh) and windows (bat) are added as well.
   
   This script runs the Reflective and ServiceLoader scans used at worker 
startup, as well as directly reading the manifest files. This is in order to 
find plugins which have manifests but are not loadable. The alternative of 
having the PluginScanner emit erroneous plugins would not include the locations 
of the manifest files without a custom ServiceLoader implementation. The 
locations of the manifest files are necessary later for the sync-manifests 
command, which will directly re-write these manifests.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] rreddy-22 commented on a diff in pull request #13998: KAFKA-14702: Extend server side assignor to support rack aware replica placement

2023-07-20 Thread via GitHub


rreddy-22 commented on code in PR #13998:
URL: https://github.com/apache/kafka/pull/13998#discussion_r1269957881


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/assignor/RangeAssignor.java:
##
@@ -87,7 +87,7 @@ private Map<Uuid, List<String>> membersPerTopic(final 
AssignmentSpec assignmentS
 for (Uuid topicId : topics) {
 // Only topics that are present in both the subscribed topics 
list and the topic metadata should be
 // considered for assignment.
-if (assignmentSpec.topics().containsKey(topicId)) {
+if 
(assignmentTopicDescriber.subscribedTopicIds().contains(topicId)) {

Review Comment:
   We could do that, but I think it's more intuitive to leave the subscribed 
topicIds method in the interface for people to implement. We also might need it 
in the other assignors, and it's probably better to have a way to get all the 
available topicIds, so I vote to keep it in.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] rreddy-22 commented on a diff in pull request #13998: KAFKA-14702: Extend server side assignor to support rack aware replica placement

2023-07-20 Thread via GitHub


rreddy-22 commented on code in PR #13998:
URL: https://github.com/apache/kafka/pull/13998#discussion_r1269953544


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/assignor/PartitionMetadata.java:
##
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.coordinator.group.assignor;
+
+import java.util.Map;
+import java.util.Objects;
+import java.util.Set;
+
+public class PartitionMetadata {
+
+/**
+ * Partition number mapped to a set of racks where
+ * its replicas are located.
+ */
+Map<Integer, Set<String>> partitionsWithRacks;
+
+//If rack information isn't available pass an empty set.
+public PartitionMetadata (Map<Integer, Set<String>> partitionsWithRacks) {
+Objects.requireNonNull(partitionsWithRacks);
+this.partitionsWithRacks = partitionsWithRacks;
+}
+
+/**
+ * Returns the number of partitions.
+ *
+ * @return number of partitions associated with the topic.
+ */
+public int numPartitions() {
+return partitionsWithRacks.size();
+}
+
+/**
+ * Returns the rack information for the replicas of the given partition.
+ *
+ * @param partition partition number.
+ * @return Set of racks associated with the replicas of the given 
partition.
+ * If no rack information is available, an empty set is returned.
+ */
+public Set<String> racks(int partition) {
+return partitionsWithRacks.get(partition);
+}
+
+@Override
+public boolean equals(Object o) {

Review Comment:
   It was auto-generated, but I'm fixing it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] rreddy-22 commented on a diff in pull request #13998: KAFKA-14702: Extend server side assignor to support rack aware replica placement

2023-07-20 Thread via GitHub


rreddy-22 commented on code in PR #13998:
URL: https://github.com/apache/kafka/pull/13998#discussion_r1269952375


##
group-coordinator/src/main/resources/common/message/ConsumerGroupPartitionMetadataValue.json:
##
@@ -29,7 +29,27 @@
   { "name": "TopicName", "versions": "0+", "type": "string",
 "about": "The topic name." },
   { "name": "NumPartitions", "versions": "0+", "type": "int32",
-"about": "The number of partitions of the topic." }
+"about": "The number of partitions of the topic." },
+  {

Review Comment:
   It's the same, except for the comma, because I had to add an extra field.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] rreddy-22 commented on a diff in pull request #13998: KAFKA-14702: Extend server side assignor to support rack aware replica placement

2023-07-20 Thread via GitHub


rreddy-22 commented on code in PR #13998:
URL: https://github.com/apache/kafka/pull/13998#discussion_r1269947944


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/assignor/PartitionMetadata.java:
##
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.coordinator.group.assignor;
+
+import java.util.Map;
+import java.util.Objects;
+import java.util.Set;
+
+public class PartitionMetadata {
+
+/**
+ * Partition number mapped to a set of racks where
+ * its replicas are located.
+ */
+Map<Integer, Set<String>> partitionsWithRacks;

Review Comment:
   Yes, my bad, I didn't add the access specifier.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] rreddy-22 commented on a diff in pull request #13998: KAFKA-14702: Extend server side assignor to support rack aware replica placement

2023-07-20 Thread via GitHub


rreddy-22 commented on code in PR #13998:
URL: https://github.com/apache/kafka/pull/13998#discussion_r1269946491


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/assignor/PartitionAssignor.java:
##
@@ -36,8 +36,9 @@ public interface PartitionAssignor {
  * Perform the group assignment given the current members and
  * topic metadata.
  *
- * @param assignmentSpec The assignment spec.
+ * @param assignmentTopicDescriber The topic and cluster metadata 
describer {@link AssignmentTopicDescriber}.
+ * @param assignmentSpec The member assignment spec.
  * @return The new assignment for the group.
  */
-GroupAssignment assign(AssignmentSpec assignmentSpec) throws 
PartitionAssignorException;
+GroupAssignment assign(AssignmentTopicDescriber assignmentTopicDescriber, 
AssignmentSpec assignmentSpec) throws PartitionAssignorException;

Review Comment:
   The order probably doesn't matter too much, but I changed it because, given 
the names, the order you suggested makes sense.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (KAFKA-15228) Add sync-manifests subcommand to connect-plugin-path tool

2023-07-20 Thread Greg Harris (Jira)
Greg Harris created KAFKA-15228:
---

 Summary: Add sync-manifests subcommand to connect-plugin-path tool
 Key: KAFKA-15228
 URL: https://issues.apache.org/jira/browse/KAFKA-15228
 Project: Kafka
  Issue Type: New Feature
  Components: KafkaConnect, tools
Reporter: Greg Harris
Assignee: Greg Harris
 Fix For: 3.6.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] rreddy-22 commented on a diff in pull request #13998: KAFKA-14702: Extend server side assignor to support rack aware replica placement

2023-07-20 Thread via GitHub


rreddy-22 commented on code in PR #13998:
URL: https://github.com/apache/kafka/pull/13998#discussion_r1269933992


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/assignor/AssignmentSpec.java:
##
@@ -52,33 +42,22 @@ public Map<String, AssignmentMemberSpec> members() {
 return members;
 }
 
-/**
- * @return Topic metadata keyed by topic Ids.
- */
-public Map<Uuid, AssignmentTopicMetadata> topics() {
-return topics;
-}
-
 @Override
 public boolean equals(Object o) {
 if (this == o) return true;
-if (o == null || getClass() != o.getClass()) return false;
+if (!(o instanceof AssignmentSpec)) return false;

Review Comment:
   Understood, it's because it avoids the possibility of a NullPointerException: 
`instanceof` evaluates to false when the passed object is null, so no explicit 
null check is needed. I'll change it everywhere.
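
A minimal standalone sketch (hypothetical class name, not the PR's code) illustrating why the `!(o instanceof T)` form also rejects null, so `equals(null)` returns false without an explicit null check:

```java
// Hypothetical stand-in for AssignmentSpec; only the equals pattern matters here.
public class InstanceofNullCheck {

    static class AssignmentSpecLike {
        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            // instanceof evaluates to false for null, so no explicit null check is needed
            if (!(o instanceof AssignmentSpecLike)) return false;
            return true;
        }

        @Override
        public int hashCode() {
            return 1; // all instances are equal, so a constant hashCode keeps the contract
        }
    }

    public static void main(String[] args) {
        AssignmentSpecLike spec = new AssignmentSpecLike();
        if (spec.equals(null)) throw new AssertionError("equals(null) must be false");
        if (!spec.equals(new AssignmentSpecLike())) throw new AssertionError("same type must be equal");
    }
}
```

By contrast, `getClass() != o.getClass()` needs a preceding `o == null` check, which is why the `instanceof` form reads as the safer default.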



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] rreddy-22 commented on a diff in pull request #13998: KAFKA-14702: Extend server side assignor to support rack aware replica placement

2023-07-20 Thread via GitHub


rreddy-22 commented on code in PR #13998:
URL: https://github.com/apache/kafka/pull/13998#discussion_r1269930044


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/assignor/AssignmentTopicMetadata.java:
##
@@ -16,44 +16,83 @@
  */
 package org.apache.kafka.coordinator.group.assignor;
 
-/**
- * Metadata of a topic.
- */
-public class AssignmentTopicMetadata {
+import org.apache.kafka.common.Uuid;
+
+import java.util.Collections;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Set;
+
+public class AssignmentTopicMetadata implements AssignmentTopicDescriber {

Review Comment:
   TargetAssignmentBuilder is part of the consumer package though, and I don't 
think this has anything to do with the consumer except for the fact that it has 
the metadata of topics that the consumer is subscribed to. Is this because 
we're going to make the assignor package public and this class isn't?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] rreddy-22 commented on a diff in pull request #13998: KAFKA-14702: Extend server side assignor to support rack aware replica placement

2023-07-20 Thread via GitHub


rreddy-22 commented on code in PR #13998:
URL: https://github.com/apache/kafka/pull/13998#discussion_r1269927386


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/assignor/AssignmentTopicDescriber.java:
##
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.coordinator.group.assignor;
+
+import org.apache.kafka.common.Uuid;
+import org.apache.kafka.common.annotation.InterfaceStability;
+
+import java.util.Set;
+
+/**
+ * The assignment topic describer is used by the {@link PartitionAssignor}
+ * to obtain topic and partition metadata of subscribed topics.
+ *
+ * The interface is kept in an internal module until KIP-848 is fully
+ * implemented and ready to be released.
+ */
+@InterfaceStability.Unstable
+public interface AssignmentTopicDescriber {
+
+/**
+ * Returns a set of subscribed topicIds.
+ *
+ * @return Set of topicIds corresponding to the subscribed topics.
+ */
+Set<Uuid> subscribedTopicIds();
+
+/**
+ * Number of partitions for the given topicId.
+ *
+ * @param topicId   Uuid corresponding to the topic.
+ * @return The number of partitions corresponding to the given topicId.
+ * If the topicId doesn't exist, return 0.
+ */
+int numPartitions(Uuid topicId);
+
+/**
+ * Returns all the racks associated with the replicas for the given 
partition.
+ *
+ * @param topicId   Uuid corresponding to the partition's topic.
+ * @param partition Partition number within topic.

Review Comment:
   Partition number is used a lot throughout the Kafka code, and I thought it's 
easier to understand than "id", even though they're interchangeable.
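
A standalone sketch of the describer contract quoted in the diff above may help. The interface names mirror the diff; `InMemoryTopicDescriber` and its backing map are hypothetical illustrations, not the real Kafka implementation:

```java
import java.util.Map;
import java.util.Set;
import java.util.UUID;

// Sketch of the AssignmentTopicDescriber contract from the diff above.
// UUID stands in for org.apache.kafka.common.Uuid to keep this self-contained.
interface TopicDescriber {
    Set<UUID> subscribedTopicIds();

    // Per the javadoc in the diff: unknown topicIds yield 0.
    int numPartitions(UUID topicId);
}

// Hypothetical in-memory implementation backed by a topicId -> partition-count map.
final class InMemoryTopicDescriber implements TopicDescriber {
    private final Map<UUID, Integer> partitionCounts;

    InMemoryTopicDescriber(Map<UUID, Integer> partitionCounts) {
        this.partitionCounts = partitionCounts;
    }

    @Override
    public Set<UUID> subscribedTopicIds() {
        return partitionCounts.keySet();
    }

    @Override
    public int numPartitions(UUID topicId) {
        return partitionCounts.getOrDefault(topicId, 0);
    }
}

public class TopicDescriberDemo {
    public static void main(String[] args) {
        UUID t1 = UUID.randomUUID();
        TopicDescriber describer = new InMemoryTopicDescriber(Map.of(t1, 3));
        System.out.println(describer.numPartitions(t1));                // 3
        System.out.println(describer.numPartitions(UUID.randomUUID())); // 0 for unknown topics
    }
}
```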






[GitHub] [kafka] C0urante commented on a diff in pull request #14005: KAFKA-15177: Implement KIP-875 SourceConnector::alterOffset API in MirrorMaker 2 connectors

2023-07-20 Thread via GitHub


C0urante commented on code in PR #14005:
URL: https://github.com/apache/kafka/pull/14005#discussion_r1269901272


##
connect/runtime/src/test/java/org/apache/kafka/connect/util/clusters/EmbeddedKafkaCluster.java:
##
@@ -597,7 +596,9 @@ private Set listPartitions(
 Admin admin,
 Collection topics
 ) throws TimeoutException, InterruptedException, ExecutionException {
-assertFalse("collection of topics may not be empty", topics.isEmpty());

Review Comment:
   Did the same in `assertConnectorAndExactlyNumTasksAreRunning`.



##
connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorUtils.java:
##
@@ -65,27 +72,170 @@ static String encodeTopicPartition(TopicPartition 
topicPartition) {
 
 static Map wrapPartition(TopicPartition topicPartition, 
String sourceClusterAlias) {
 Map wrapped = new HashMap<>();
-wrapped.put("topic", topicPartition.topic());
-wrapped.put("partition", topicPartition.partition());
-wrapped.put("cluster", sourceClusterAlias);
+wrapped.put(TOPIC_KEY, topicPartition.topic());
+wrapped.put(PARTITION_KEY, topicPartition.partition());
+wrapped.put(SOURCE_CLUSTER_KEY, sourceClusterAlias);
 return wrapped;
 }
 
-static Map wrapOffset(long offset) {
-return Collections.singletonMap("offset", offset);
+public static Map wrapOffset(long offset) {
+return Collections.singletonMap(OFFSET_KEY, offset);
 }
 
-static TopicPartition unwrapPartition(Map wrapped) {
-String topic = (String) wrapped.get("topic");
-int partition = (Integer) wrapped.get("partition");
+public static TopicPartition unwrapPartition(Map wrapped) {
+String topic = (String) wrapped.get(TOPIC_KEY);
+int partition = (Integer) wrapped.get(PARTITION_KEY);
 return new TopicPartition(topic, partition);
 }
 
 static Long unwrapOffset(Map wrapped) {
-if (wrapped == null || wrapped.get("offset") == null) {
+if (wrapped == null || wrapped.get(OFFSET_KEY) == null) {
 return -1L;
 }
-return (Long) wrapped.get("offset");
+return (Long) wrapped.get(OFFSET_KEY);
+}
+
+
+/**
+ * Validate a specific key in a source partition that may be written to 
the offsets topic for one of the MM2 connectors.
+ * This method ensures that the key is present in the source partition map 
and that its value is a string.
+ *
+ * @see org.apache.kafka.connect.source.SourceConnector#alterOffsets(Map, 
Map)
+ * @see SourceRecord#sourcePartition()
+ *
+ * @param sourcePartition the to-be-validated source partition; may not be 
null
+ * @param key the key to check for in the source partition; may be null
+ *
+ * @throws ConnectException if the offset is invalid
+ */
+static void validateSourcePartitionString(Map sourcePartition, 
String key) {
+Objects.requireNonNull(sourcePartition, "Source partition may not be 
null");
+
+if (!sourcePartition.containsKey(key))
+throw new ConnectException(String.format(
+"Source partition %s is missing the '%s' key, which is 
required",
+sourcePartition,
+key
+));
+
+Object value = sourcePartition.get(key);
+if (!(value instanceof String)) {
+throw new ConnectException(String.format(
+"Source partition %s has an invalid value %s for the '%s' 
key, which must be a string",
+sourcePartition,
+value,
+key
+));
+}
+}
+
+/**
+ * Validate the {@link #PARTITION_KEY partition key} in a source partition 
that may be written to the offsets topic
+ * for one of the MM2 connectors.
+ * This method ensures that the key is present in the source partition map 
and that its value is a non-negative integer.
+ * 
+ * Note that the partition key most likely refers to a partition in a 
Kafka topic, whereas the term "source partition" refers
+ * to a {@link SourceRecord#sourcePartition() source partition} that is 
stored in a Kafka Connect worker's internal offsets
+ * topic (or, if running in standalone mode, offsets file).
+ *
+ * @see org.apache.kafka.connect.source.SourceConnector#alterOffsets(Map, 
Map)
+ * @see SourceRecord#sourcePartition()
+ *
+ * @param sourcePartition the to-be-validated source partition; may not be 
null
+ *
+ * @throws ConnectException if the offset is invalid
+ */
+static void validateSourcePartitionPartition(Map 
sourcePartition) {
+Objects.requireNonNull(sourcePartition, "Source partition may not be 
null");
+
+if (!sourcePartition.containsKey(PARTITION_KEY))
+throw new ConnectException(String.format(
+"Source partition %s is missing the '%s' 

[GitHub] [kafka] gharris1727 commented on pull request #13313: KAFKA-14760: Move ThroughputThrottler from tools to clients, remove tools dependency from connect-runtime

2023-07-20 Thread via GitHub


gharris1727 commented on PR #13313:
URL: https://github.com/apache/kafka/pull/13313#issuecomment-1644520071

   Thanks for your help Ismael!





[GitHub] [kafka] gharris1727 merged pull request #13313: KAFKA-14760: Move ThroughputThrottler from tools to clients, remove tools dependency from connect-runtime

2023-07-20 Thread via GitHub


gharris1727 merged PR #13313:
URL: https://github.com/apache/kafka/pull/13313





[jira] [Created] (KAFKA-15227) Use plugin.discovery=SERVICE_LOAD in all plugin test suites

2023-07-20 Thread Greg Harris (Jira)
Greg Harris created KAFKA-15227:
---

 Summary: Use plugin.discovery=SERVICE_LOAD in all plugin test 
suites
 Key: KAFKA-15227
 URL: https://issues.apache.org/jira/browse/KAFKA-15227
 Project: Kafka
  Issue Type: Test
  Components: KafkaConnect
Reporter: Greg Harris
Assignee: Greg Harris


To speed up these tests where we know all plugins are migrated, use 
SERVICE_LOAD mode in all known test suites.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15226) System tests for plugin.discovery worker configuration

2023-07-20 Thread Greg Harris (Jira)
Greg Harris created KAFKA-15226:
---

 Summary: System tests for plugin.discovery worker configuration
 Key: KAFKA-15226
 URL: https://issues.apache.org/jira/browse/KAFKA-15226
 Project: Kafka
  Issue Type: Test
  Components: KafkaConnect
Reporter: Greg Harris
Assignee: Greg Harris


Add system tests as described in KIP-898, targeting the startup behavior of the 
connect worker, various states of plugin migration, and the migration script.





[GitHub] [kafka] rreddy-22 commented on a diff in pull request #13998: KAFKA-14702: Extend server side assignor to support rack aware replica placement

2023-07-20 Thread via GitHub


rreddy-22 commented on code in PR #13998:
URL: https://github.com/apache/kafka/pull/13998#discussion_r1269846580


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/assignor/AssignmentTopicDescriber.java:
##
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.coordinator.group.assignor;
+
+import org.apache.kafka.common.Uuid;
+import org.apache.kafka.common.annotation.InterfaceStability;
+
+import java.util.Set;
+
+/**
+ * The assignment topic describer is used by the {@link PartitionAssignor}
+ * to obtain topic and partition metadata of subscribed topics.
+ *
+ * The interface is kept in an internal module until KIP-848 is fully
+ * implemented and ready to be released.
+ */
+@InterfaceStability.Unstable
+public interface AssignmentTopicDescriber {
+
+/**
+ * Returns a set of subscribed topicIds.
+ *
+ * @return Set of topicIds corresponding to the subscribed topics.
+ */
+Set<Uuid> subscribedTopicIds();
+
+/**
+ * Number of partitions for the given topicId.

Review Comment:
   It already says topicId (singular). Did we want a space between topic and Id?






[GitHub] [kafka] rreddy-22 commented on a diff in pull request #13998: KAFKA-14702: Extend server side assignor to support rack aware replica placement

2023-07-20 Thread via GitHub


rreddy-22 commented on code in PR #13998:
URL: https://github.com/apache/kafka/pull/13998#discussion_r1269837065


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/assignor/AssignmentTopicDescriber.java:
##
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.coordinator.group.assignor;
+
+import org.apache.kafka.common.Uuid;
+import org.apache.kafka.common.annotation.InterfaceStability;
+
+import java.util.Set;
+
+/**
+ * The assignment topic describer is used by the {@link PartitionAssignor}
+ * to obtain topic and partition metadata of subscribed topics.
+ *
+ * The interface is kept in an internal module until KIP-848 is fully
+ * implemented and ready to be released.
+ */
+@InterfaceStability.Unstable
+public interface AssignmentTopicDescriber {

Review Comment:
   Yeah, I named it this way because I was wondering if it'd be more uniform 
with assignmentSpec, but I'll change it since I agree.






[jira] [Created] (KAFKA-15225) Define constants for record types

2023-07-20 Thread David Jacot (Jira)
David Jacot created KAFKA-15225:
---

 Summary: Define constants for record types
 Key: KAFKA-15225
 URL: https://issues.apache.org/jira/browse/KAFKA-15225
 Project: Kafka
  Issue Type: Sub-task
Reporter: David Jacot
Assignee: David Jacot


Define constants for all the record types. Ideally, this should be defined in 
the record definitions and the constants should be auto-generated (e.g. like 
ApiKeys).





[GitHub] [kafka] dajac commented on a diff in pull request #14047: KAFKA-14499: [2/N] Add OffsetCommit record & related

2023-07-20 Thread via GitHub


dajac commented on code in PR #14047:
URL: https://github.com/apache/kafka/pull/14047#discussion_r1269799340


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/RecordHelpers.java:
##
@@ -467,6 +471,72 @@ public static Record newEmptyGroupMetadataRecord(
 );
 }
 
+/**
+ * Creates an OffsetCommit record.
+ *
+ * @param groupId   The group id.
+ * @param topic The topic name.
+ * @param partitionId   The partition id.
+ * @param offsetAndMetadata The offset and metadata.
+ * @param metadataVersion   The metadata version.
+ * @return The record.
+ */
+public static Record newOffsetCommitRecord(
+String groupId,
+String topic,
+int partitionId,
+OffsetAndMetadata offsetAndMetadata,
+MetadataVersion metadataVersion
+) {
+short version = offsetAndMetadata.expireTimestampMs.isPresent() ?
+(short) 1 : metadataVersion.offsetCommitValueVersion();
+
+return new Record(
+new ApiMessageAndVersion(
+new OffsetCommitKey()
+.setGroup(groupId)
+.setTopic(topic)
+.setPartition(partitionId),
+(short) 1

Review Comment:
   Filed https://issues.apache.org/jira/browse/KAFKA-15225 for this.






[GitHub] [kafka] rreddy-22 commented on pull request #13920: KAFKA-15106 fix AbstractStickyAssignor isBalanced predict

2023-07-20 Thread via GitHub


rreddy-22 commented on PR #13920:
URL: https://github.com/apache/kafka/pull/13920#issuecomment-1644359975

   Looks good to me! Thanks @flashmouse for the changes and replies! @dajac is 
a committer so he'll give the final approval!





[GitHub] [kafka] msn-tldr opened a new pull request, #14063: Kip951 poc

2023-07-20 Thread via GitHub


msn-tldr opened a new pull request, #14063:
URL: https://github.com/apache/kafka/pull/14063

   *More detailed description of your change,
   if necessary. The PR title and PR message become
   the squashed commit message, so use a separate
   comment to ping reviewers.*
   
   *Summary of testing strategy (including rationale)
   for the feature or bug fix. Unit and/or integration
   tests are expected for any behaviour change and
   system tests should be considered for larger changes.*
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[GitHub] [kafka] jolshan commented on pull request #14046: KAFKA-14499: [1/N] Introduce OffsetCommit API version 9 and add new StaleMemberEpochException error

2023-07-20 Thread via GitHub


jolshan commented on PR #14046:
URL: https://github.com/apache/kafka/pull/14046#issuecomment-1644292908

   Looking at the tests, `[Build / JDK 20 and Scala 2.13 / kafka.server.FetchRequestTest.testCurrentEpochValidationV12()](https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14046/7/testReport/junit/kafka.server/FetchRequestTest/Build___JDK_20_and_Scala_2_13___testCurrentEpochValidationV12__/)` 
is a bit strange, but it only failed on that version. Everything else seems to 
be familiar-ish flakes.





[GitHub] [kafka] dajac commented on a diff in pull request #14047: KAFKA-14499: [2/N] Add OffsetCommit record & related

2023-07-20 Thread via GitHub


dajac commented on code in PR #14047:
URL: https://github.com/apache/kafka/pull/14047#discussion_r1269744467


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/RecordHelpers.java:
##
@@ -467,6 +471,72 @@ public static Record newEmptyGroupMetadataRecord(
 );
 }
 
+/**
+ * Creates an OffsetCommit record.
+ *
+ * @param groupId   The group id.
+ * @param topic The topic name.
+ * @param partitionId   The partition id.
+ * @param offsetAndMetadata The offset and metadata.
+ * @param metadataVersion   The metadata version.
+ * @return The record.
+ */
+public static Record newOffsetCommitRecord(
+String groupId,
+String topic,
+int partitionId,
+OffsetAndMetadata offsetAndMetadata,
+MetadataVersion metadataVersion
+) {
+short version = offsetAndMetadata.expireTimestampMs.isPresent() ?
+(short) 1 : metadataVersion.offsetCommitValueVersion();
+
+return new Record(
+new ApiMessageAndVersion(
+new OffsetCommitKey()
+.setGroup(groupId)
+.setTopic(topic)
+.setPartition(partitionId),
+(short) 1

Review Comment:
   I actually used the value on purpose vs using something like 
`ConsumerGroupTargetAssignmentMemberKey.HIGHEST_SUPPORTED_VERSION ` in order to 
not change it by mistake.
   
   I wanted to rework the format of those records to include an api key and to 
auto-generate the constants based on it. In the meantime, we could define them 
manually. Do you mind if I address this separately though? I will do it for all 
the records at once.
   








[GitHub] [kafka] dajac commented on a diff in pull request #14047: KAFKA-14499: [2/N] Add OffsetCommit record & related

2023-07-20 Thread via GitHub


dajac commented on code in PR #14047:
URL: https://github.com/apache/kafka/pull/14047#discussion_r1269737507


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/RecordHelpers.java:
##
@@ -467,6 +471,72 @@ public static Record newEmptyGroupMetadataRecord(
 );
 }
 
+/**
+ * Creates an OffsetCommit record.
+ *
+ * @param groupId   The group id.
+ * @param topic The topic name.
+ * @param partitionId   The partition id.
+ * @param offsetAndMetadata The offset and metadata.
+ * @param metadataVersion   The metadata version.
+ * @return The record.
+ */
+public static Record newOffsetCommitRecord(
+String groupId,
+String topic,
+int partitionId,
+OffsetAndMetadata offsetAndMetadata,
+MetadataVersion metadataVersion
+) {
+short version = offsetAndMetadata.expireTimestampMs.isPresent() ?
+(short) 1 : metadataVersion.offsetCommitValueVersion();

Review Comment:
   Yeah, I was debating whether the 
`offsetAndMetadata.expireTimestampMs.isPresent()` part of this should be in 
MetadataVersion or not. I could pass a boolean for this purpose.
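
The version-selection rule under discussion can be sketched as a pure function. This is illustrative only: the real `offsetCommitValueVersion()` lives on `MetadataVersion`, and the value 3 used here for the non-expiring case is an assumed stand-in, not the actual mapping:

```java
import java.util.OptionalLong;

// Sketch of the version choice in newOffsetCommitRecord: commits with an
// expire timestamp are pinned to value version 1 (the only version carrying
// the timestamp); otherwise the version comes from the metadata version.
public class OffsetCommitVersionDemo {
    // Hypothetical stand-in for MetadataVersion.offsetCommitValueVersion().
    static short offsetCommitValueVersion(boolean latestMetadataVersion) {
        return latestMetadataVersion ? (short) 3 : (short) 1;
    }

    static short chooseVersion(OptionalLong expireTimestampMs, boolean latestMetadataVersion) {
        return expireTimestampMs.isPresent()
            ? (short) 1
            : offsetCommitValueVersion(latestMetadataVersion);
    }

    public static void main(String[] args) {
        // An expiring commit forces version 1 even on the latest metadata version.
        System.out.println(chooseVersion(OptionalLong.of(1_000L), true));
        // A non-expiring commit uses the metadata-version-derived value.
        System.out.println(chooseVersion(OptionalLong.empty(), true));
    }
}
```

Passing a boolean (as suggested above) would let this decision live entirely inside `MetadataVersion` without the caller inspecting `expireTimestampMs` itself.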






[GitHub] [kafka] dajac commented on a diff in pull request #14046: KAFKA-14499: [1/N] Introduce OffsetCommit API version 9 and add new StaleMemberEpochException error

2023-07-20 Thread via GitHub


dajac commented on code in PR #14046:
URL: https://github.com/apache/kafka/pull/14046#discussion_r1269732280


##
clients/src/main/resources/common/message/OffsetCommitRequest.json:
##
@@ -31,13 +31,19 @@
   // version 7 adds a new field called groupInstanceId to indicate member 
identity across restarts.
   //
   // Version 8 is the first flexible version.
-  "validVersions": "0-8",
+  //
+  // Version 9 is the first version that can be used with the new consumer 
group protocol (KIP-848). The
+  // request is the same as version 8.
+  // Version 9 is added as part of KIP-848 and is still under development. 
Hence, the last version of the
+  // API is not exposed by default by brokers unless explicitly enabled.
+  "latestVersionUnstable": true,
+  "validVersions": "0-9",
   "flexibleVersions": "8+",
   "fields": [
 { "name": "GroupId", "type": "string", "versions": "0+", "entityType": 
"groupId",
   "about": "The unique group identifier." },
-{ "name": "GenerationId", "type": "int32", "versions": "1+", "default": 
"-1", "ignorable": true,
-  "about": "The generation of the group." },
+{ "name": "GenerationIdOrMemberEpoch", "type": "int32", "versions": "1+", 
"default": "-1", "ignorable": true,
+  "about": "The generation of the group if the generic group protocol or 
the member epoch if the consumer protocol." },

Review Comment:
   It's based on the type of the group. In the new group coordinator, we have two 
types: generic (the old protocol) and consumer (the new protocol).






[GitHub] [kafka] jolshan commented on pull request #14046: KAFKA-14499: [1/N] Introduce OffsetCommit API version 9 and add new StaleMemberEpochException error

2023-07-20 Thread via GitHub


jolshan commented on PR #14046:
URL: https://github.com/apache/kafka/pull/14046#issuecomment-1644264091

   > @jolshan I was actually thinking about the AuthorizerIntegrationTest 
failures overnight and I found an issue with the latestVersionUnstable flag. 
Let me try to explain.
   
   I was curious if the unstable version flag was causing issues since I recall 
some weirdness in tests when I had an unstable version. Makes sense to require 
the unstable-ness to be explicit, but I will take a second look.





[GitHub] [kafka] mumrah commented on a diff in pull request #14047: KAFKA-14499: [2/N] Add OffsetCommit record & related

2023-07-20 Thread via GitHub


mumrah commented on code in PR #14047:
URL: https://github.com/apache/kafka/pull/14047#discussion_r1269713183


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/RecordHelpers.java:
##
@@ -467,6 +471,72 @@ public static Record newEmptyGroupMetadataRecord(
 );
 }
 
+/**
+ * Creates an OffsetCommit record.
+ *
+ * @param groupId   The group id.
+ * @param topic The topic name.
+ * @param partitionId   The partition id.
+ * @param offsetAndMetadata The offset and metadata.
+ * @param metadataVersion   The metadata version.
+ * @return The record.
+ */
+public static Record newOffsetCommitRecord(
+String groupId,
+String topic,
+int partitionId,
+OffsetAndMetadata offsetAndMetadata,
+MetadataVersion metadataVersion
+) {
+short version = offsetAndMetadata.expireTimestampMs.isPresent() ?
+(short) 1 : metadataVersion.offsetCommitValueVersion();
+
+return new Record(
+new ApiMessageAndVersion(
+new OffsetCommitKey()
+.setGroup(groupId)
+.setTopic(topic)
+.setPartition(partitionId),
+(short) 1

Review Comment:
   Can we define these `(short) 1` as a constant? That might reduce the chances 
of us changing one without the others in the future.



##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/RecordHelpers.java:
##
@@ -467,6 +471,72 @@ public static Record newEmptyGroupMetadataRecord(
 );
 }
 
+/**
+ * Creates an OffsetCommit record.
+ *
+ * @param groupId   The group id.
+ * @param topic The topic name.
+ * @param partitionId   The partition id.
+ * @param offsetAndMetadata The offset and metadata.
+ * @param metadataVersion   The metadata version.
+ * @return The record.
+ */
+public static Record newOffsetCommitRecord(
+String groupId,
+String topic,
+int partitionId,
+OffsetAndMetadata offsetAndMetadata,
+MetadataVersion metadataVersion
+) {
+short version = offsetAndMetadata.expireTimestampMs.isPresent() ?
+(short) 1 : metadataVersion.offsetCommitValueVersion();

Review Comment:
   Would it make sense to relocate this logic and the linked logic into 
MetadataVersion?






[GitHub] [kafka] jolshan commented on a diff in pull request #14046: KAFKA-14499: [1/N] Introduce OffsetCommit API version 9 and add new StaleMemberEpochException error

2023-07-20 Thread via GitHub


jolshan commented on code in PR #14046:
URL: https://github.com/apache/kafka/pull/14046#discussion_r1269711912


##
clients/src/main/resources/common/message/OffsetCommitRequest.json:
##
@@ -31,13 +31,19 @@
   // version 7 adds a new field called groupInstanceId to indicate member 
identity across restarts.
   //
   // Version 8 is the first flexible version.
-  "validVersions": "0-8",
+  //
+  // Version 9 is the first version that can be used with the new consumer 
group protocol (KIP-848). The
+  // request is the same as version 8.
+  // Version 9 is added as part of KIP-848 and is still under development. 
Hence, the last version of the
+  // API is not exposed by default by brokers unless explicitly enabled.
+  "latestVersionUnstable": true,
+  "validVersions": "0-9",
   "flexibleVersions": "8+",
   "fields": [
 { "name": "GroupId", "type": "string", "versions": "0+", "entityType": 
"groupId",
   "about": "The unique group identifier." },
-{ "name": "GenerationId", "type": "int32", "versions": "1+", "default": 
"-1", "ignorable": true,
-  "about": "The generation of the group." },
+{ "name": "GenerationIdOrMemberEpoch", "type": "int32", "versions": "1+", 
"default": "-1", "ignorable": true,
+  "about": "The generation of the group if the generic group protocol or 
the member epoch if the consumer protocol." },

Review Comment:
   The new group coordinator uses the member epoch and the old one uses the 
generation id I believe.






[GitHub] [kafka] mumrah commented on a diff in pull request #14046: KAFKA-14499: [1/N] Introduce OffsetCommit API version 9 and add new StaleMemberEpochException error

2023-07-20 Thread via GitHub


mumrah commented on code in PR #14046:
URL: https://github.com/apache/kafka/pull/14046#discussion_r1269710067


##
clients/src/main/java/org/apache/kafka/common/errors/StaleMemberEpochException.java:
##
@@ -0,0 +1,26 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.common.errors;
+
+import org.apache.kafka.common.annotation.InterfaceStability;
+
+@InterfaceStability.Evolving
+public class StaleMemberEpochException extends ApiException {

Review Comment:
   I know there isn't much precedent for this, but it might be useful to 
include a doc string here explaining which RPC this error is used in and at 
what version



##
clients/src/main/resources/common/message/OffsetCommitRequest.json:
##
@@ -31,13 +31,19 @@
   // version 7 adds a new field called groupInstanceId to indicate member 
identity across restarts.
   //
   // Version 8 is the first flexible version.
-  "validVersions": "0-8",
+  //
+  // Version 9 is the first version that can be used with the new consumer 
group protocol (KIP-848). The
+  // request is the same as version 8.
+  // Version 9 is added as part of KIP-848 and is still under development. 
Hence, the last version of the
+  // API is not exposed by default by brokers unless explicitly enabled.
+  "latestVersionUnstable": true,
+  "validVersions": "0-9",
   "flexibleVersions": "8+",
   "fields": [
 { "name": "GroupId", "type": "string", "versions": "0+", "entityType": 
"groupId",
   "about": "The unique group identifier." },
-{ "name": "GenerationId", "type": "int32", "versions": "1+", "default": 
"-1", "ignorable": true,
-  "about": "The generation of the group." },
+{ "name": "GenerationIdOrMemberEpoch", "type": "int32", "versions": "1+", 
"default": "-1", "ignorable": true,
+  "about": "The generation of the group if the generic group protocol or 
the member epoch if the consumer protocol." },

Review Comment:
   How does the server decide to interpret this value as a GenerationId vs a 
MemberEpoch? Is it based on the API version used?






[GitHub] [kafka] mumrah opened a new pull request, #14062: MINOR: Add a Builder for KRaftMigrationDriver

2023-07-20 Thread via GitHub


mumrah opened a new pull request, #14062:
URL: https://github.com/apache/kafka/pull/14062

   The number of arguments for KRaftMigrationDriver has grown rather large and 
there are already two constructors. This PR refactors the class to have a 
single package-private constructor and a builder that can be used by tests and 
ControllerServer. 
   
   No other changes in this patch, just refactoring the constructor.
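
   As a rough illustration of the refactoring described (the class name, fields, 
and setters below are hypothetical, not the real KRaftMigrationDriver API), the 
package-private-constructor-plus-builder shape looks like:

   ```java
// Hypothetical sketch of the constructor-plus-builder pattern; the fields are
// illustrative stand-ins, not the actual KRaftMigrationDriver arguments.
public class MigrationDriver {
    private final int nodeId;
    private final long pollIntervalMs;

    // Package-private: production code and tests both go through the Builder.
    MigrationDriver(Builder b) {
        this.nodeId = b.nodeId;
        this.pollIntervalMs = b.pollIntervalMs;
    }

    public String describe() {
        return "node-" + nodeId + " poll=" + pollIntervalMs + "ms";
    }

    public static class Builder {
        private int nodeId;
        private long pollIntervalMs = 1000L; // default avoids a long constructor call

        public Builder setNodeId(int nodeId) { this.nodeId = nodeId; return this; }
        public Builder setPollIntervalMs(long ms) { this.pollIntervalMs = ms; return this; }
        public MigrationDriver build() { return new MigrationDriver(this); }
    }

    public static void main(String[] args) {
        MigrationDriver d = new MigrationDriver.Builder().setNodeId(1).build();
        System.out.println(d.describe());
    }
}
   ```

   The benefit is that tests can set only the collaborators they care about and 
rely on defaults for the rest, instead of threading every argument through a 
growing constructor.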





[GitHub] [kafka] junrao commented on pull request #13990: KAFKA-14937: Refactoring for client code to reduce boilerplate

2023-07-20 Thread via GitHub


junrao commented on PR #13990:
URL: https://github.com/apache/kafka/pull/13990#issuecomment-1644221461

   @kirktrue : It seems there were 4 test failures for jdk 11. But the tests 
for jdk 17 and 20 were aborted. Do you know why?





[GitHub] [kafka] divijvaidya merged pull request #13874: KAFKA-14133: Migrate various mocks in TaskManagerTest to Mockito

2023-07-20 Thread via GitHub


divijvaidya merged PR #13874:
URL: https://github.com/apache/kafka/pull/13874





[GitHub] [kafka] rreddy-22 commented on a diff in pull request #13998: KAFKA-14702: Extend server side assignor to support rack aware replica placement

2023-07-20 Thread via GitHub


rreddy-22 commented on code in PR #13998:
URL: https://github.com/apache/kafka/pull/13998#discussion_r1269665264


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/assignor/AssignmentSpec.java:
##
@@ -52,33 +42,22 @@ public Map members() {
 return members;
 }
 
-/**
- * @return Topic metadata keyed by topic Ids.
- */
-public Map topics() {
-return topics;
-}
-
 @Override
 public boolean equals(Object o) {
 if (this == o) return true;
-if (o == null || getClass() != o.getClass()) return false;
+if (!(o instanceof AssignmentSpec)) return false;

Review Comment:
   Sorry, I just auto-generated these functions. Is there a reason why one is 
better than the other?
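
   For context, the distinction being discussed can be shown with a small 
self-contained sketch (the class names are illustrative, not the actual 
AssignmentSpec):

   ```java
// Illustrative classes only; not the actual AssignmentSpec implementation.
class Base {
    final int id;
    Base(int id) { this.id = id; }

    // instanceof-based equals: instances of subclasses can compare equal
    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Base)) return false;
        return id == ((Base) o).id;
    }

    @Override
    public int hashCode() { return id; }
}

class Sub extends Base {
    Sub(int id) { super(id); }
}

public class EqualsDemo {
    // getClass()-based equals: a subclass instance is never equal to a Base
    static boolean strictEquals(Base a, Object o) {
        if (a == o) return true;
        if (o == null || a.getClass() != o.getClass()) return false;
        return a.id == ((Base) o).id;
    }

    public static void main(String[] args) {
        Base b = new Base(1);
        Sub s = new Sub(1);
        System.out.println(b.equals(s));                   // instanceof style accepts the subclass
        System.out.println(EqualsDemo.strictEquals(b, s)); // getClass() style rejects it
    }
}
   ```

   The instanceof form tolerates subclasses (and handles null for free), while 
the getClass() form enforces exact-type symmetry; which is preferable depends 
on whether subclass instances are meant to be interchangeable with the base.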






[GitHub] [kafka] gharris1727 commented on pull request #13313: KAFKA-14760: Move ThroughputThrottler from tools to clients, remove tools dependency from connect-runtime

2023-07-20 Thread via GitHub


gharris1727 commented on PR #13313:
URL: https://github.com/apache/kafka/pull/13313#issuecomment-1644153313

   @ijuma Could you take another look at this? This is blocking KIP-898 that 
I'm trying to get landed in time for 3.6.0. Thanks!





[GitHub] [kafka] divijvaidya commented on a diff in pull request #14057: KAFKA-15194-Prepend-Offset-as-Filename

2023-07-20 Thread via GitHub


divijvaidya commented on code in PR #14057:
URL: https://github.com/apache/kafka/pull/14057#discussion_r1269646588


##
storage/src/test/java/org/apache/kafka/server/log/remote/storage/RemoteLogSegmentFileset.java:
##
@@ -59,9 +59,9 @@
  * the local tiered storage:
  *
  * 
- * / storage-directory / topic-partition-uuidBase64 / 
oAtiIQ95REujbuzNd_lkLQ.log
- *  . 
oAtiIQ95REujbuzNd_lkLQ.index
- *  . 
oAtiIQ95REujbuzNd_lkLQ.timeindex
+ * / storage-directory / topic-partition-uuidBase64 / 
startOffset-oAtiIQ95REujbuzNd_lkLQ.log

Review Comment:
   nit
   
   Please replace "startOffset" with dummy values.



##
storage/src/test/java/org/apache/kafka/server/log/remote/storage/LocalTieredStorageTest.java:
##
@@ -399,20 +403,21 @@ public Verifier(final LocalTieredStorage remoteStorage, 
final TopicIdPartition t
 this.topicIdPartition = requireNonNull(topicIdPartition);
 }
 
-private List expectedPaths(final RemoteLogSegmentId id) {
+private List expectedPaths(final RemoteLogSegmentMetadata 
metadata) {
 final String rootPath = getStorageRootDirectory();
 TopicPartition tp = topicIdPartition.topicPartition();
 final String topicPartitionSubpath = format("%s-%d-%s", 
tp.topic(), tp.partition(),
 topicIdPartition.topicId());
-final String uuid = id.id().toString();
+final String uuid = metadata.remoteLogSegmentId().id().toString();
+final long startOffset = metadata.startOffset();
 
 return Arrays.asList(
-Paths.get(rootPath, topicPartitionSubpath, uuid + 
LogFileUtils.LOG_FILE_SUFFIX),
-Paths.get(rootPath, topicPartitionSubpath, uuid + 
LogFileUtils.INDEX_FILE_SUFFIX),
-Paths.get(rootPath, topicPartitionSubpath, uuid + 
LogFileUtils.TIME_INDEX_FILE_SUFFIX),
-Paths.get(rootPath, topicPartitionSubpath, uuid + 
LogFileUtils.TXN_INDEX_FILE_SUFFIX),
-Paths.get(rootPath, topicPartitionSubpath, uuid + 
LEADER_EPOCH_CHECKPOINT.getSuffix()),
-Paths.get(rootPath, topicPartitionSubpath, uuid + 
LogFileUtils.PRODUCER_SNAPSHOT_FILE_SUFFIX)
+Paths.get(rootPath, topicPartitionSubpath, startOffset + 
"-" + uuid + LogFileUtils.LOG_FILE_SUFFIX),

Review Comment:
   > I don't think it will allow us to insert a uuid in the middle as part of 
the filename.
   
   Ack. I missed that.
   
   > maybe we should make LogFileUtils#filenamePrefixFromOffset(long offset) as 
a public method so that we can construct a real offset using this method. What 
do you think ?
   
   Yes please. Let's use that.
   
   






[GitHub] [kafka] gharris1727 commented on pull request #13313: KAFKA-14760: Move ThroughputThrottler from tools to clients, remove tools dependency from connect-runtime

2023-07-20 Thread via GitHub


gharris1727 commented on PR #13313:
URL: https://github.com/apache/kafka/pull/13313#issuecomment-1644151941

   I ran a full system test run:
   
   ```
   

   SESSION REPORT (ALL TESTS)
   ducktape version: 0.11.3
   session_id:   2023-07-18--002
   run time: 1602 minutes 29.170 seconds
   tests run:1096
   passed:   786
   flaky:0
   failed:   20
   ignored:  290
   

   ```
   
   With the following failed tests:
   ```
   
'tests/kafkatest/tests/core/throttling_test.py::ThrottlingTest.test_throttled_reassignment@{"bounce_brokers":true}'
 
   
'tests/kafkatest/tests/core/mirror_maker_test.py::TestMirrorMakerService.test_bounce@{"clean_shutdown":false,"security_protocol":"SASL_PLAINTEXT"}'
 
   
'tests/kafkatest/tests/core/mirror_maker_test.py::TestMirrorMakerService.test_bounce@{"clean_shutdown":false,"security_protocol":"SASL_SSL"}'
 
   
'tests/kafkatest/tests/core/reassign_partitions_test.py::ReassignPartitionsTest.test_reassign_partitions@{"bounce_brokers":false,"reassign_from_offset_zero":false,"metadata_quorum":"REMOTE_KRAFT"}'
 
   
'tests/kafkatest/tests/core/reassign_partitions_test.py::ReassignPartitionsTest.test_reassign_partitions@{"bounce_brokers":true,"reassign_from_offset_zero":false,"metadata_quorum":"REMOTE_KRAFT"}'
 
   
'tests/kafkatest/tests/core/reassign_partitions_test.py::ReassignPartitionsTest.test_reassign_partitions@{"bounce_brokers":true,"reassign_from_offset_zero":false,"metadata_quorum":"ZK"}'
 
   
'tests/kafkatest/tests/streams/streams_smoke_test.py::StreamsSmokeTest.test_streams@{"processing_guarantee":"at_least_once","crash":false,"metadata_quorum":"REMOTE_KRAFT"}'
 
   
'tests/kafkatest/tests/core/mirror_maker_test.py::TestMirrorMakerService.test_bounce@{"clean_shutdown":false,"security_protocol":"PLAINTEXT"}'
 
   
'tests/kafkatest/tests/core/mirror_maker_test.py::TestMirrorMakerService.test_bounce@{"clean_shutdown":false,"security_protocol":"SSL"}'
 
   
'tests/kafkatest/tests/tools/replica_verification_test.py::ReplicaVerificationToolTest.test_replica_lags@{"metadata_quorum":"REMOTE_KRAFT"}'
 
   
'tests/kafkatest/tests/client/quota_test.py::QuotaTest.test_quota@{"quota_type":"(user,
 client-id)","override_quota":false}' 
   
'tests/kafkatest/tests/client/quota_test.py::QuotaTest.test_quota@{"quota_type":"(user,
 client-id)","override_quota":true}' 
   
'tests/kafkatest/tests/client/quota_test.py::QuotaTest.test_quota@{"quota_type":"client-id","consumer_num":2}'
 
   
'tests/kafkatest/tests/client/quota_test.py::QuotaTest.test_quota@{"quota_type":"client-id","old_broker_throttling_behavior":true}'
 
   
'tests/kafkatest/tests/client/quota_test.py::QuotaTest.test_quota@{"quota_type":"client-id","old_client_throttling_behavior":true}'
 
   
'tests/kafkatest/tests/client/quota_test.py::QuotaTest.test_quota@{"quota_type":"client-id","override_quota":false}'
 
   
'tests/kafkatest/tests/client/quota_test.py::QuotaTest.test_quota@{"quota_type":"client-id","override_quota":true}'
 
   
'tests/kafkatest/tests/client/quota_test.py::QuotaTest.test_quota@{"quota_type":"user","override_quota":false}'
 
   
'tests/kafkatest/tests/client/quota_test.py::QuotaTest.test_quota@{"quota_type":"user","override_quota":true}'
 
   
'tests/kafkatest/tests/core/network_degrade_test.py::NetworkDegradeTest.test_rate@{"task_name":"rate-1000-latency-50","device_name":"eth0","latency_ms":50,"rate_limit_kbit":100}'
   ```
   
   None of these make use of the 0.8.2.x artifacts affected here. In particular, 
the test I was concerned about (upgrade_test.py 
from_kafka_version=0.8.2.2.to_message_format_version=None.compression_types=.snappy
 FAIL) does pass on this x86_64 machine after failing on my arm64 machine, 
indicating that the failure was due to missing native library dependencies.





[GitHub] [kafka] bmscomp commented on pull request #14060: KAFKA-15222: Upgrade zinc Scala incremental compiler plugin version to a latests stable fit version (1.9.2)

2023-07-20 Thread via GitHub


bmscomp commented on PR #14060:
URL: https://github.com/apache/kafka/pull/14060#issuecomment-1644078470

   It's ok now, things seem more stable, but there are some failures when 
building Kafka with JDK 20 that have no relation to the zinc compiler.
   
   Notice that for all builds the retry_zinc step completed without issue.
   





[GitHub] [kafka] yashmayya commented on pull request #14024: KAFKA-13431: Expose the original pre-transform topic partition and offset in sink records

2023-07-20 Thread via GitHub


yashmayya commented on PR #14024:
URL: https://github.com/apache/kafka/pull/14024#issuecomment-1644061803

   Thanks Chris, I've rebased this on the latest `trunk`.





[jira] [Updated] (KAFKA-15216) InternalSinkRecord::newRecord method ignores the headers argument

2023-07-20 Thread Chris Egerton (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Egerton updated KAFKA-15216:
--
Fix Version/s: 3.3.3

> InternalSinkRecord::newRecord method ignores the headers argument
> -
>
> Key: KAFKA-15216
> URL: https://issues.apache.org/jira/browse/KAFKA-15216
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Yash Mayya
>Assignee: Yash Mayya
>Priority: Minor
> Fix For: 3.3.3, 3.6.0, 3.4.2, 3.5.2
>
>
> [https://github.com/apache/kafka/blob/a1f6ab69387deb10988461152a0087f0cd2827c4/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/InternalSinkRecord.java#L50-L56]
>  - the headers argument passed to the {{InternalSinkRecord}} constructor is 
> the instance field via the accessor {{headers()}} method instead of the 
> {{newRecord}} method's {{headers}} argument value.
>  
> Originally discovered 
> [here.|https://github.com/apache/kafka/pull/14024#discussion_r1266917499]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15216) InternalSinkRecord::newRecord method ignores the headers argument

2023-07-20 Thread Chris Egerton (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Egerton updated KAFKA-15216:
--
Fix Version/s: 3.4.2

> InternalSinkRecord::newRecord method ignores the headers argument
> -
>
> Key: KAFKA-15216
> URL: https://issues.apache.org/jira/browse/KAFKA-15216
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Yash Mayya
>Assignee: Yash Mayya
>Priority: Minor
> Fix For: 3.6.0, 3.4.2, 3.5.2
>
>
> [https://github.com/apache/kafka/blob/a1f6ab69387deb10988461152a0087f0cd2827c4/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/InternalSinkRecord.java#L50-L56]
>  - the headers argument passed to the {{InternalSinkRecord}} constructor is 
> the instance field via the accessor {{headers()}} method instead of the 
> {{newRecord}} method's {{headers}} argument value.
>  
> Originally discovered 
> [here.|https://github.com/apache/kafka/pull/14024#discussion_r1266917499]





[jira] [Updated] (KAFKA-15216) InternalSinkRecord::newRecord method ignores the headers argument

2023-07-20 Thread Chris Egerton (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Egerton updated KAFKA-15216:
--
Fix Version/s: 3.5.2

> InternalSinkRecord::newRecord method ignores the headers argument
> -
>
> Key: KAFKA-15216
> URL: https://issues.apache.org/jira/browse/KAFKA-15216
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Yash Mayya
>Assignee: Yash Mayya
>Priority: Minor
> Fix For: 3.6.0, 3.5.2
>
>
> [https://github.com/apache/kafka/blob/a1f6ab69387deb10988461152a0087f0cd2827c4/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/InternalSinkRecord.java#L50-L56]
>  - the headers argument passed to the {{InternalSinkRecord}} constructor is 
> the instance field via the accessor {{headers()}} method instead of the 
> {{newRecord}} method's {{headers}} argument value.
>  
> Originally discovered 
> [here.|https://github.com/apache/kafka/pull/14024#discussion_r1266917499]





[jira] [Resolved] (KAFKA-14669) Include MirrorMaker connector configurations in docs

2023-07-20 Thread Chris Egerton (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Egerton resolved KAFKA-14669.
---
Resolution: Done

> Include MirrorMaker connector configurations in docs
> 
>
> Key: KAFKA-14669
> URL: https://issues.apache.org/jira/browse/KAFKA-14669
> Project: Kafka
>  Issue Type: Improvement
>  Components: docs
>Reporter: Mickael Maison
>Assignee: Gantigmaa Selenge
>Priority: Major
> Fix For: 3.6.0
>
>
> In the https://kafka.apache.org/documentation/#georeplication-flow-configure 
> section we list some of the MirrorMaker connectors configurations. These are 
> hardcoded in the docs: 
> https://github.com/apache/kafka/blob/trunk/docs/ops.html#L768-L788
> Instead we should use the generated docs (added as part of 
> https://github.com/apache/kafka/commit/40af3a74507cce9155f4fb4fca317d3c68235d78)
>  like we do for the file connectors.





[jira] [Updated] (KAFKA-14669) Include MirrorMaker connector configurations in docs

2023-07-20 Thread Chris Egerton (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Egerton updated KAFKA-14669:
--
Priority: Major  (was: Blocker)

> Include MirrorMaker connector configurations in docs
> 
>
> Key: KAFKA-14669
> URL: https://issues.apache.org/jira/browse/KAFKA-14669
> Project: Kafka
>  Issue Type: Improvement
>  Components: docs
>Reporter: Mickael Maison
>Assignee: Gantigmaa Selenge
>Priority: Major
> Fix For: 3.6.0
>
>
> In the https://kafka.apache.org/documentation/#georeplication-flow-configure 
> section we list some of the MirrorMaker connectors configurations. These are 
> hardcoded in the docs: 
> https://github.com/apache/kafka/blob/trunk/docs/ops.html#L768-L788
> Instead we should use the generated docs (added as part of 
> https://github.com/apache/kafka/commit/40af3a74507cce9155f4fb4fca317d3c68235d78)
>  like we do for the file connectors.





[GitHub] [kafka] C0urante merged pull request #14041: KAFKA-14469: Add MirrorMaker 2 configs to table of contents in docs page

2023-07-20 Thread via GitHub


C0urante merged PR #14041:
URL: https://github.com/apache/kafka/pull/14041





[GitHub] [kafka] C0urante merged pull request #14044: KAFKA-15216: InternalSinkRecord::newRecord should not ignore new headers

2023-07-20 Thread via GitHub


C0urante merged PR #14044:
URL: https://github.com/apache/kafka/pull/14044





[GitHub] [kafka] cadonna merged pull request #13942: KAFKA-14936: Check the versioned table's history retention and compare to grace period (4/N)

2023-07-20 Thread via GitHub


cadonna merged PR #13942:
URL: https://github.com/apache/kafka/pull/13942





[GitHub] [kafka] cadonna commented on pull request #13942: KAFKA-14936: Check the versioned table's history retention and compare to grace period (4/N)

2023-07-20 Thread via GitHub


cadonna commented on PR #13942:
URL: https://github.com/apache/kafka/pull/13942#issuecomment-1643982125

   Build failures are unrelated:
   ```
   Build / JDK 20 and Scala 2.13 / 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationExactlyOnceTest.testOffsetTranslationBehindReplicationFlow()
   Build / JDK 20 and Scala 2.13 / 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationExactlyOnceTest.testOffsetTranslationBehindReplicationFlow()
   Build / JDK 20 and Scala 2.13 / 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsWithCustomForwardingAdminIntegrationTest.testReplication()
   Build / JDK 20 and Scala 2.13 / 
org.apache.kafka.controller.QuorumControllerTest.testBalancePartitionLeaders()
   Build / JDK 17 and Scala 2.13 / 
org.apache.kafka.connect.mirror.integration.IdentityReplicationIntegrationTest.testReplicateSourceDefault()
   Build / JDK 17 and Scala 2.13 / 
integration.kafka.server.FetchFromFollowerIntegrationTest.testRackAwareRangeAssignor()
   Build / JDK 17 and Scala 2.13 / 
org.apache.kafka.controller.QuorumControllerTest.testBalancePartitionLeaders()
   Build / JDK 11 and Scala 2.13 / 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationExactlyOnceTest.testOffsetTranslationBehindReplicationFlow()
   ```





[jira] [Updated] (KAFKA-15200) verify pre-requisite at start of release.py

2023-07-20 Thread Divij Vaidya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Divij Vaidya updated KAFKA-15200:
-
Description: 
At the start of release.py, the first thing it should do is verify that 
dependency pre-requisites are satisfied. The pre-requisites are:
 # maven should be installed.
 # sftp should be installed. Connection to @home.apache.org should be 
successful. Currently it is done manually at the step "Verify by using 
`{{{}sftp @home.apache.org{}}}`" in 
[https://cwiki.apache.org/confluence/display/KAFKA/Release+Process]
 # svn should be installed

  was:
At the start of release.py, the first thing it should do is verify that 
dependency pre-requisites are satisfied. The pre-requisites are:

1. maven should be installed.
2. sftp should be installed. Connection to @home.apache.org should be 
successful. Currently it is done manually at the step "Verify by using 
`{{{}sftp @home.apache.org{}}}`" in 
[https://cwiki.apache.org/confluence/display/KAFKA/Release+Process] 


> verify pre-requisite at start of release.py
> ---
>
> Key: KAFKA-15200
> URL: https://issues.apache.org/jira/browse/KAFKA-15200
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Divij Vaidya
>Priority: Major
>
> At the start of release.py, the first thing it should do is verify that 
> dependency pre-requisites are satisfied. The pre-requisites are:
>  # maven should be installed.
>  # sftp should be installed. Connection to @home.apache.org should be 
> successful. Currently it is done manually at the step "Verify by using 
> `{{{}sftp @home.apache.org{}}}`" in 
> [https://cwiki.apache.org/confluence/display/KAFKA/Release+Process]
>  # svn should be installed





[jira] [Updated] (KAFKA-15224) Automate version change to snapshot

2023-07-20 Thread Divij Vaidya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Divij Vaidya updated KAFKA-15224:
-
Description: 
We require changing to SNAPSHOT version as part of the release process [1]. The 
specific manual steps are:

Update version on the branch to 0.10.0.1-SNAPSHOT in the following places:
 * 
 ** docs/js/templateData.js
 ** gradle.properties
 ** kafka-merge-pr.py
 ** streams/quickstart/java/pom.xml
 ** streams/quickstart/java/src/main/resources/archetype-resources/pom.xml
 ** streams/quickstart/pom.xml
 ** tests/kafkatest/__init__.py (note: this version name can't follow the 
-SNAPSHOT convention due to python version naming restrictions, instead
update it to 0.10.0.1.dev0)
 ** tests/kafkatest/version.py

The diff of the changes look like 
[https://github.com/apache/kafka/commit/484a86feb562f645bdbec74b18f8a28395a686f7#diff-21a0ab11b8bbdab9930ad18d4bca2d943bbdf40d29d68ab8a96f765bd1f9]
 

 

It would be nice if we could run a script to automatically do it. Note that 
release.py (line 550) already does something similar where it replaces SNAPSHOT 
with actual version. We need to do the opposite here. We can repurpose that 
code in release.py and extract it into a new script to perform this operation.

[1] [https://cwiki.apache.org/confluence/display/KAFKA/Release+Process]

  was:
We require changing to SNAPSHOT version as part of the release process [1]. The 
specific manual steps are:

Update version on the branch to 0.10.0.1-SNAPSHOT in the following places:
 * 
 ** docs/js/templateData.js
 ** gradle.properties
 ** kafka-merge-pr.py
 ** streams/quickstart/java/pom.xml
 ** streams/quickstart/java/src/main/resources/archetype-resources/pom.xml
 ** streams/quickstart/pom.xml
 ** tests/kafkatest/__init__.py (note: this version name can't follow the 
-SNAPSHOT convention due to python version naming restrictions, instead
update it to 0.10.0.1.dev0)
 ** tests/kafkatest/version.py

It would be nice if we could run a script to automatically do it. Note that 
release.py (line 550) already does something similar where it replaces SNAPSHOT 
with actual version. We need to do the opposite here. We can repurpose that 
code in release.py and extract it into a new script to perform this operation.

[1] https://cwiki.apache.org/confluence/display/KAFKA/Release+Process


> Automate version change to snapshot 
> 
>
> Key: KAFKA-15224
> URL: https://issues.apache.org/jira/browse/KAFKA-15224
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Divij Vaidya
>Priority: Minor
>
> We require changing to SNAPSHOT version as part of the release process [1]. 
> The specific manual steps are:
> Update version on the branch to 0.10.0.1-SNAPSHOT in the following places:
>  * 
>  ** docs/js/templateData.js
>  ** gradle.properties
>  ** kafka-merge-pr.py
>  ** streams/quickstart/java/pom.xml
>  ** streams/quickstart/java/src/main/resources/archetype-resources/pom.xml
>  ** streams/quickstart/pom.xml
>  ** tests/kafkatest/__init__.py (note: this version name can't follow the 
> -SNAPSHOT convention due to python version naming restrictions, instead
> update it to 0.10.0.1.dev0)
>  ** tests/kafkatest/version.py
> The diff of the changes look like 
> [https://github.com/apache/kafka/commit/484a86feb562f645bdbec74b18f8a28395a686f7#diff-21a0ab11b8bbdab9930ad18d4bca2d943bbdf40d29d68ab8a96f765bd1f9]
>  
>  
> It would be nice if we could run a script to automatically do it. Note that 
> release.py (line 550) already does something similar where it replaces 
> SNAPSHOT with actual version. We need to do the opposite here. We can 
> repurpose that code in release.py and extract it into a new script to perform 
> this operation.
> [1] [https://cwiki.apache.org/confluence/display/KAFKA/Release+Process]





[jira] [Created] (KAFKA-15224) Automate version change to snapshot

2023-07-20 Thread Divij Vaidya (Jira)
Divij Vaidya created KAFKA-15224:


 Summary: Automate version change to snapshot 
 Key: KAFKA-15224
 URL: https://issues.apache.org/jira/browse/KAFKA-15224
 Project: Kafka
  Issue Type: Sub-task
Reporter: Divij Vaidya


We require changing to SNAPSHOT version as part of the release process [1]. The 
specific manual steps are:

Update version on the branch to 0.10.0.1-SNAPSHOT in the following places:
 * 
 ** docs/js/templateData.js
 ** gradle.properties
 ** kafka-merge-pr.py
 ** streams/quickstart/java/pom.xml
 ** streams/quickstart/java/src/main/resources/archetype-resources/pom.xml
 ** streams/quickstart/pom.xml
 ** tests/kafkatest/__init__.py (note: this version name can't follow the 
-SNAPSHOT convention due to python version naming restrictions, instead
update it to 0.10.0.1.dev0)
 ** tests/kafkatest/version.py

It would be nice if we could run a script to automatically do it. Note that 
release.py (line 550) already does something similar where it replaces SNAPSHOT 
with actual version. We need to do the opposite here. We can repurpose that 
code in release.py and extract it into a new script to perform this operation.

[1] https://cwiki.apache.org/confluence/display/KAFKA/Release+Process





[GitHub] [kafka] Owen-CH-Leung commented on a diff in pull request #14057: KAFKA-15194-Prepend-Offset-as-Filename

2023-07-20 Thread via GitHub


Owen-CH-Leung commented on code in PR #14057:
URL: https://github.com/apache/kafka/pull/14057#discussion_r1269422337


##
storage/src/test/java/org/apache/kafka/server/log/remote/storage/LocalTieredStorageTest.java:
##
@@ -399,20 +403,21 @@ public Verifier(final LocalTieredStorage remoteStorage, final TopicIdPartition t
 this.topicIdPartition = requireNonNull(topicIdPartition);
 }
 
-private List<Path> expectedPaths(final RemoteLogSegmentId id) {
+private List<Path> expectedPaths(final RemoteLogSegmentMetadata metadata) {
 final String rootPath = getStorageRootDirectory();
 TopicPartition tp = topicIdPartition.topicPartition();
 final String topicPartitionSubpath = format("%s-%d-%s", tp.topic(), tp.partition(),
 topicIdPartition.topicId());
-final String uuid = id.id().toString();
+final String uuid = metadata.remoteLogSegmentId().id().toString();
+final long startOffset = metadata.startOffset();
 
 return Arrays.asList(
-Paths.get(rootPath, topicPartitionSubpath, uuid + LogFileUtils.LOG_FILE_SUFFIX),
-Paths.get(rootPath, topicPartitionSubpath, uuid + LogFileUtils.INDEX_FILE_SUFFIX),
-Paths.get(rootPath, topicPartitionSubpath, uuid + LogFileUtils.TIME_INDEX_FILE_SUFFIX),
-Paths.get(rootPath, topicPartitionSubpath, uuid + LogFileUtils.TXN_INDEX_FILE_SUFFIX),
-Paths.get(rootPath, topicPartitionSubpath, uuid + LEADER_EPOCH_CHECKPOINT.getSuffix()),
-Paths.get(rootPath, topicPartitionSubpath, uuid + LogFileUtils.PRODUCER_SNAPSHOT_FILE_SUFFIX)
+Paths.get(rootPath, topicPartitionSubpath, startOffset + "-" + uuid + LogFileUtils.LOG_FILE_SUFFIX),

Review Comment:
   @divijvaidya Thanks for your feedback. I think the actual log file is named as [offset.filetype]. Looking at the implementation of `LogFileUtils#logFile(File dir, long offset)`, I don't think it will allow us to insert a uuid in the middle of the filename.
   
   If we are to keep the `[offset-uuid.filetype]` pattern, instead of using `LogFileUtils#logFile(File dir, long offset)`, maybe we should make `LogFileUtils#filenamePrefixFromOffset(long offset)` a public method so that we can construct a real offset prefix with it. What do you think?
   
   FYI, the method that creates these offloaded files is `RemoteLogSegmentFileset#openFileset(final File storageDir, final RemoteLogSegmentId id)`. Currently my PR changes this method to accept `RemoteLogSegmentMetadata` instead of `RemoteLogSegmentId`, gets the offset from the metadata, and prepends it to the filename. (So yes, it's not close to the actual log file implementation, as the offset was just "0" without formatting, instead of "000".)
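   If `filenamePrefixFromOffset` were made public as suggested, file names would follow the broker's zero-padded convention. A minimal sketch of that formatting (the class and helper below are stand-ins for illustration, assuming the fixed 20-digit width used for local segment files):

```java
public class OffsetPrefixSketch {
    // Sketch: fixed-width, zero-padded offset prefix, mirroring the
    // 20-digit convention used for local log segment file names.
    static String filenamePrefixFromOffset(long offset) {
        return String.format("%020d", offset);
    }

    public static void main(String[] args) {
        String uuid = "oAtiIQ95REujbuzNd_lkLQ";
        // e.g. a segment whose start offset is 0:
        System.out.println(filenamePrefixFromOffset(0L) + "-" + uuid + ".log");
        // prints 00000000000000000000-oAtiIQ95REujbuzNd_lkLQ.log
    }
}
```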



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] Owen-CH-Leung commented on a diff in pull request #14057: KAFKA-15194-Prepend-Offset-as-Filename

2023-07-20 Thread via GitHub


Owen-CH-Leung commented on code in PR #14057:
URL: https://github.com/apache/kafka/pull/14057#discussion_r1269422337


##
storage/src/test/java/org/apache/kafka/server/log/remote/storage/LocalTieredStorageTest.java:
##
@@ -399,20 +403,21 @@ public Verifier(final LocalTieredStorage remoteStorage, final TopicIdPartition t
 this.topicIdPartition = requireNonNull(topicIdPartition);
 }
 
-private List<Path> expectedPaths(final RemoteLogSegmentId id) {
+private List<Path> expectedPaths(final RemoteLogSegmentMetadata metadata) {
 final String rootPath = getStorageRootDirectory();
 TopicPartition tp = topicIdPartition.topicPartition();
 final String topicPartitionSubpath = format("%s-%d-%s", tp.topic(), tp.partition(),
 topicIdPartition.topicId());
-final String uuid = id.id().toString();
+final String uuid = metadata.remoteLogSegmentId().id().toString();
+final long startOffset = metadata.startOffset();
 
 return Arrays.asList(
-Paths.get(rootPath, topicPartitionSubpath, uuid + LogFileUtils.LOG_FILE_SUFFIX),
-Paths.get(rootPath, topicPartitionSubpath, uuid + LogFileUtils.INDEX_FILE_SUFFIX),
-Paths.get(rootPath, topicPartitionSubpath, uuid + LogFileUtils.TIME_INDEX_FILE_SUFFIX),
-Paths.get(rootPath, topicPartitionSubpath, uuid + LogFileUtils.TXN_INDEX_FILE_SUFFIX),
-Paths.get(rootPath, topicPartitionSubpath, uuid + LEADER_EPOCH_CHECKPOINT.getSuffix()),
-Paths.get(rootPath, topicPartitionSubpath, uuid + LogFileUtils.PRODUCER_SNAPSHOT_FILE_SUFFIX)
+Paths.get(rootPath, topicPartitionSubpath, startOffset + "-" + uuid + LogFileUtils.LOG_FILE_SUFFIX),

Review Comment:
   @divijvaidya Thanks for your feedback. I think the actual log file is actually named [offset].log. Looking at the implementation of `LogFileUtils#logFile(File dir, long offset)`, I don't think it will allow us to insert a uuid in the middle of the filename.
   
   If we are to keep the `[offset-uuid.filetype]` pattern, instead of using `LogFileUtils#logFile(File dir, long offset)`, maybe we should make `LogFileUtils#filenamePrefixFromOffset(long offset)` a public method so that we can construct a real offset prefix with it. What do you think?
   
   FYI, the method that creates these offloaded files is `RemoteLogSegmentFileset#openFileset(final File storageDir, final RemoteLogSegmentId id)`. Currently my PR changes this method to accept `RemoteLogSegmentMetadata` instead of `RemoteLogSegmentId`, gets the offset from the metadata, and prepends it to the filename. (So yes, it's not close to the actual log file implementation, as the offset was just "0" without formatting, instead of "000".)






[GitHub] [kafka] bmscomp commented on pull request #14060: KAFKA-15222: Upgrade zinc Scala incremental compiler plugin version to a latests stable fit version (1.9.2)

2023-07-20 Thread via GitHub


bmscomp commented on PR #14060:
URL: https://github.com/apache/kafka/pull/14060#issuecomment-1643853720

   In the current pull request CI, the errors related to zinc appeared again in the current Jenkins build. Rebasing the branch will run the build again; the strange behaviour is that the locked zinc file in question points to an old version of zinc.
   
   I am now checking the behaviour of the build, wait and see 
   
   





[GitHub] [kafka] dajac commented on a diff in pull request #14017: KAFKA-14500; [6/6] Implement SyncGroup protocol in new GroupCoordinator

2023-07-20 Thread via GitHub


dajac commented on code in PR #14017:
URL: https://github.com/apache/kafka/pull/14017#discussion_r1269401629


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/GroupMetadataManager.java:
##
@@ -2643,9 +2652,175 @@ private CoordinatorResult updateStaticMemberAndRebalance(
 group.stateAsString() + " when the unknown static member " + request.groupInstanceId() + " rejoins.");
 
 }
+return maybeCompleteJoinPhase(group);
+}
+
+public CoordinatorResult genericGroupSync(
+RequestContext context,
+SyncGroupRequestData request,
+CompletableFuture<SyncGroupResponseData> responseFuture
+) throws UnknownMemberIdException, GroupIdNotFoundException {
+String groupId = request.groupId();
+String memberId = request.memberId();
+GenericGroup group = getOrMaybeCreateGenericGroup(groupId, false);
+Optional<Errors> errorOpt = validateSyncGroup(group, request);
+if (errorOpt.isPresent()) {
+responseFuture.complete(new SyncGroupResponseData()
+.setErrorCode(errorOpt.get().code()));
+
+} else if (group.isInState(EMPTY)) {
+responseFuture.complete(new SyncGroupResponseData()
+.setErrorCode(Errors.UNKNOWN_MEMBER_ID.code()));
+
+} else if (group.isInState(PREPARING_REBALANCE)) {
+responseFuture.complete(new SyncGroupResponseData()
+.setErrorCode(Errors.REBALANCE_IN_PROGRESS.code()));
+
+} else if (group.isInState(COMPLETING_REBALANCE)) {
+group.member(memberId).setAwaitingSyncFuture(responseFuture);
+removePendingSyncMember(group, request.memberId());
+
+// If this is the leader, then we can attempt to persist state and transition to stable
+if (group.isLeader(memberId)) {
+log.info("Assignment received from leader {} for group {} for generation {}. " +
+"The group has {} members, {} of which are static.",
+memberId, groupId, group.generationId(),
+group.size(), group.allStaticMemberIds().size());
+
+// Fill all members with corresponding assignment. Reset members not specified in
+// the assignment to empty assignments.
+Map<String, byte[]> assignments = new HashMap<>();
+request.assignments()
+.forEach(assignment -> assignments.put(assignment.memberId(), assignment.assignment()));
+
+Set<String> membersWithMissingAssignment = new HashSet<>();
+group.allMembers().forEach(member -> {
+byte[] assignment = assignments.get(member.memberId());
+if (assignment != null) {
+member.setAssignment(assignment);
+} else {
+membersWithMissingAssignment.add(member.memberId());
+member.setAssignment(new byte[0]);
+}
+});
+
+if (!membersWithMissingAssignment.isEmpty()) {
+log.warn("Setting empty assignments for members {} of {} for generation {}.",
+membersWithMissingAssignment, groupId, group.generationId());
+}
+
+CompletableFuture<Void> appendFuture = new CompletableFuture<>();
+appendFuture.whenComplete((__, t) -> {
+// Another member may have joined the group while we were awaiting this callback,
+// so we must ensure we are still in the CompletingRebalance state and the same generation
+// when it gets invoked. if we have transitioned to another state, then do nothing
+if (group.isInState(COMPLETING_REBALANCE) && request.generationId() == group.generationId()) {
+if (t != null) {
+Errors error = Errors.forException(t);
+resetAndPropagateAssignmentWithError(group, error);
+maybePrepareRebalanceOrCompleteJoin(group, "Error " + error + " when storing group assignment" +
+"during SyncGroup (member: " + memberId + ").");
+} else {
+// Members' assignments were already updated. Propagate and transition to Stable.
+propagateAssignment(group, Errors.NONE);
+group.transitionTo(STABLE);
+}
+}
+});
+
+List<Record> records = Collections.singletonList(
+RecordHelpers.newGroupMetadataRecord(group, metadataImage.features().metadataVersion())
+);
+return new CoordinatorResult<>(records, appendFuture);
+}
+
+} else if (group.isInState(STABLE)) {
+

[GitHub] [kafka] dajac commented on a diff in pull request #14017: KAFKA-14500; [6/6] Implement SyncGroup protocol in new GroupCoordinator

2023-07-20 Thread via GitHub


dajac commented on code in PR #14017:
URL: https://github.com/apache/kafka/pull/14017#discussion_r1269398879


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/GroupMetadataManager.java:
##
@@ -2643,9 +2652,175 @@ private CoordinatorResult updateStaticMemberAndRebalance(
 group.stateAsString() + " when the unknown static member " + request.groupInstanceId() + " rejoins.");
 
 }
+return maybeCompleteJoinPhase(group);
+}
+
+public CoordinatorResult genericGroupSync(
+RequestContext context,
+SyncGroupRequestData request,
+CompletableFuture<SyncGroupResponseData> responseFuture
+) throws UnknownMemberIdException, GroupIdNotFoundException {
+String groupId = request.groupId();
+String memberId = request.memberId();
+GenericGroup group = getOrMaybeCreateGenericGroup(groupId, false);
+Optional<Errors> errorOpt = validateSyncGroup(group, request);
+if (errorOpt.isPresent()) {
+responseFuture.complete(new SyncGroupResponseData()
+.setErrorCode(errorOpt.get().code()));
+
+} else if (group.isInState(EMPTY)) {
+responseFuture.complete(new SyncGroupResponseData()
+.setErrorCode(Errors.UNKNOWN_MEMBER_ID.code()));
+
+} else if (group.isInState(PREPARING_REBALANCE)) {
+responseFuture.complete(new SyncGroupResponseData()
+.setErrorCode(Errors.REBALANCE_IN_PROGRESS.code()));
+
+} else if (group.isInState(COMPLETING_REBALANCE)) {
+group.member(memberId).setAwaitingSyncFuture(responseFuture);
+removePendingSyncMember(group, request.memberId());
+
+// If this is the leader, then we can attempt to persist state and transition to stable
+if (group.isLeader(memberId)) {
+log.info("Assignment received from leader {} for group {} for generation {}. " +
+"The group has {} members, {} of which are static.",
+memberId, groupId, group.generationId(),
+group.size(), group.allStaticMemberIds().size());
+
+// Fill all members with corresponding assignment. Reset members not specified in
+// the assignment to empty assignments.
+Map<String, byte[]> assignments = new HashMap<>();
+request.assignments()
+.forEach(assignment -> assignments.put(assignment.memberId(), assignment.assignment()));
+
+Set<String> membersWithMissingAssignment = new HashSet<>();
+group.allMembers().forEach(member -> {
+byte[] assignment = assignments.get(member.memberId());
+if (assignment != null) {
+member.setAssignment(assignment);
+} else {
+membersWithMissingAssignment.add(member.memberId());
+member.setAssignment(new byte[0]);
+}
+});

Review Comment:
   I am not sure. I lean towards keeping the implementation as it was to avoid 
any unwanted side effects.





