[GitHub] [kafka] cmccabe commented on a diff in pull request #13374: KAFKA-14765 and KAFKA-14776: Support for SCRAM at bootstrap with integration tests

2023-04-03 Thread via GitHub


cmccabe commented on code in PR #13374:
URL: https://github.com/apache/kafka/pull/13374#discussion_r1156401794


##
core/src/test/java/kafka/testkit/KafkaClusterTestKit.java:
##
@@ -395,9 +395,7 @@ private void formatNodeAndLog(MetaProperties properties, String metadataLogDir,
 try (PrintStream out = new PrintStream(stream)) {
 StorageTool.formatCommand(out,
 JavaConverters.asScalaBuffer(Collections.singletonList(metadataLogDir)).toSeq(),
-properties,
-MetadataVersion.MINIMUM_BOOTSTRAP_VERSION,
-false);
+properties, MetadataVersion.MINIMUM_BOOTSTRAP_VERSION, false);

Review Comment:
   do we have to do this whitespace change?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] cmccabe commented on a diff in pull request #13374: KAFKA-14765 and KAFKA-14776: Support for SCRAM at bootstrap with integration tests

2023-04-03 Thread via GitHub


cmccabe commented on code in PR #13374:
URL: https://github.com/apache/kafka/pull/13374#discussion_r1156401424


##
core/src/main/scala/kafka/tools/StorageTool.scala:
##
@@ -235,9 +379,21 @@ object StorageTool extends Logging {
 metaProperties: MetaProperties,
 metadataVersion: MetadataVersion,
 ignoreFormatted: Boolean): Int = {
+val bootstrapMetadata = buildBootstrapMetadata(metadataVersion, None, "format command")
+formatCommand(stream, directories, metaProperties, bootstrapMetadata, metadataVersion, ignoreFormatted)
+  }
+
+
+  def formatCommand(stream: PrintStream,

Review Comment:
   can you use standard formatting here
   ```
   Foo(
 bar
 baz
 quux
   )
   ```
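
   Something like this, just as a sketch — the parameter names come from the call site above, and the types are guessed from the old overload rather than copied from the PR:
   ```
   def formatCommand(
     stream: PrintStream,
     directories: Seq[String],
     metaProperties: MetaProperties,
     bootstrapMetadata: BootstrapMetadata,
     metadataVersion: MetadataVersion,
     ignoreFormatted: Boolean
   ): Int = {
     // ... existing format logic ...
   }
   ```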



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] cmccabe commented on a diff in pull request #13374: KAFKA-14765 and KAFKA-14776: Support for SCRAM at bootstrap with integration tests

2023-04-03 Thread via GitHub


cmccabe commented on code in PR #13374:
URL: https://github.com/apache/kafka/pull/13374#discussion_r1156400641


##
clients/src/main/java/org/apache/kafka/common/security/scram/internals/ScramMechanism.java:
##
@@ -40,11 +48,13 @@ public enum ScramMechanism {
 MECHANISMS_MAP = Collections.unmodifiableMap(map);
 }
 
-ScramMechanism(String hashAlgorithm, String macAlgorithm, int minIterations) {
+ScramMechanism(byte type, String hashAlgorithm, String macAlgorithm, int minIterations, int maxIterations) {

Review Comment:
   can you use standard indentation here
   ```
   Foo(
 bar
 baz
 quux
   ) {
   }
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] cmccabe commented on a diff in pull request #13374: KAFKA-14765 and KAFKA-14776: Support for SCRAM at bootstrap with integration tests

2023-03-27 Thread via GitHub


cmccabe commented on code in PR #13374:
URL: https://github.com/apache/kafka/pull/13374#discussion_r1149874897


##
core/src/main/scala/kafka/tools/StorageTool.scala:
##
@@ -22,19 +22,37 @@ import java.nio.file.{Files, Paths}
 import kafka.server.{BrokerMetadataCheckpoint, KafkaConfig, MetaProperties, RawMetaProperties}
 import kafka.utils.{Exit, Logging}
 import net.sourceforge.argparse4j.ArgumentParsers
-import net.sourceforge.argparse4j.impl.Arguments.{store, storeTrue}
+import net.sourceforge.argparse4j.impl.Arguments.{store, storeTrue, append}
 import net.sourceforge.argparse4j.inf.Namespace
 import org.apache.kafka.common.Uuid
 import org.apache.kafka.common.utils.Utils
 import org.apache.kafka.metadata.bootstrap.{BootstrapDirectory, BootstrapMetadata}
-import org.apache.kafka.server.common.MetadataVersion
+import org.apache.kafka.server.common.{ApiMessageAndVersion, MetadataVersion}
+import org.apache.kafka.common.metadata.FeatureLevelRecord
+import org.apache.kafka.common.metadata.UserScramCredentialRecord
+import org.apache.kafka.common.security.scram.internals.ScramMechanism
+import org.apache.kafka.common.security.scram.internals.ScramFormatter
 
+
+import java.util
+import java.util.Base64
 import java.util.Optional
 import scala.collection.mutable
+import scala.jdk.CollectionConverters._
+import scala.collection.mutable.ArrayBuffer
 
 object StorageTool extends Logging {
   def main(args: Array[String]): Unit = {
 try {
+  main_internal(args)

Review Comment:
   Can we just change the `System.exit` to an `Exit.exit` in the catch block 
for `TerseFailureException` and be done with it? I don't like this "two main 
functions" thing. And `main_internal` isn't a name we would use (we don't use 
underscores in names)...
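
   Something like this is all I have in mind — a sketch only, assuming the catch block keeps roughly its current shape:
   ```
   def main(args: Array[String]): Unit = {
     try {
       // parse the args and run the requested command directly, no separate "internal" main
     } catch {
       case e: TerseFailure =>
         System.err.println(e.getMessage)
         Exit.exit(1)   // Exit.exit instead of System.exit
     }
   }
   ```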



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] cmccabe commented on a diff in pull request #13374: KAFKA-14765 and KAFKA-14776: Support for SCRAM at bootstrap with integration tests

2023-03-27 Thread via GitHub


cmccabe commented on code in PR #13374:
URL: https://github.com/apache/kafka/pull/13374#discussion_r1149874897


##
core/src/main/scala/kafka/tools/StorageTool.scala:
##
@@ -22,19 +22,37 @@ import java.nio.file.{Files, Paths}
 import kafka.server.{BrokerMetadataCheckpoint, KafkaConfig, MetaProperties, RawMetaProperties}
 import kafka.utils.{Exit, Logging}
 import net.sourceforge.argparse4j.ArgumentParsers
-import net.sourceforge.argparse4j.impl.Arguments.{store, storeTrue}
+import net.sourceforge.argparse4j.impl.Arguments.{store, storeTrue, append}
 import net.sourceforge.argparse4j.inf.Namespace
 import org.apache.kafka.common.Uuid
 import org.apache.kafka.common.utils.Utils
 import org.apache.kafka.metadata.bootstrap.{BootstrapDirectory, BootstrapMetadata}
-import org.apache.kafka.server.common.MetadataVersion
+import org.apache.kafka.server.common.{ApiMessageAndVersion, MetadataVersion}
+import org.apache.kafka.common.metadata.FeatureLevelRecord
+import org.apache.kafka.common.metadata.UserScramCredentialRecord
+import org.apache.kafka.common.security.scram.internals.ScramMechanism
+import org.apache.kafka.common.security.scram.internals.ScramFormatter
 
+
+import java.util
+import java.util.Base64
 import java.util.Optional
 import scala.collection.mutable
+import scala.jdk.CollectionConverters._
+import scala.collection.mutable.ArrayBuffer
 
 object StorageTool extends Logging {
   def main(args: Array[String]): Unit = {
 try {
+  main_internal(args)

Review Comment:
   We don't use `snake_case`; we use `camelCase`.
   
   Also, it seems messy to have calls to `Exit.exit` both inside the "internal" 
function and outside it. It makes you wonder why we're bothering with the 
internal function.
   
   So what I'd suggest is:
   1. rename `main_internal` to `run` or something like that
   2. have `run` return the integer status code we should pass to `Exit.exit()`
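
   Roughly this shape (a sketch only; the body of `run` is elided here, it's not the PR's actual code):
   ```
   def main(args: Array[String]): Unit = {
     try {
       Exit.exit(run(args))   // run returns the status code
     } catch {
       case e: TerseFailure =>
         System.err.println(e.getMessage)
         Exit.exit(1)
     }
   }

   def run(args: Array[String]): Int = {
     // parse the arguments, dispatch to the requested subcommand, return 0 on success
     0
   }
   ```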



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] cmccabe commented on a diff in pull request #13374: KAFKA-14765 and KAFKA-14776: Support for SCRAM at bootstrap with integration tests

2023-03-27 Thread via GitHub


cmccabe commented on code in PR #13374:
URL: https://github.com/apache/kafka/pull/13374#discussion_r1149876669


##
core/src/main/scala/kafka/tools/StorageTool.scala:
##
@@ -128,6 +152,108 @@ object StorageTool extends Logging {
   .getOrElse(defaultValue)
   }
 
+  def getUserScramCredentialRecord(mechanism: String,

Review Comment:
   There are a lot of contributors who are rewriting tools in Java right now. 
You can take a look at the `git log` of the last month or two:
   
   KAFKA-14578: Move ConsumerPerformance to tools (#13215)
   KAFKA-14590: Move DelegationTokenCommand to tools (#13172)
   KAFKA-14582: Move JmxTool to tools (#13136)
   KAFKA-14575: Move ClusterTool to tools module (#13080)
   MINOR: Move MetadataQuorumCommand from `core` to `tools` (#12951)
   ... etc ...
   
   These rewrites aren't a "maybe someday" kind of thing; they are happening 
right now.
   
   Anyway, like I said earlier, we don't have to do that in this PR :) None of 
this will be hard to port to Java when the time comes.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] cmccabe commented on a diff in pull request #13374: KAFKA-14765 and KAFKA-14776: Support for SCRAM at bootstrap with integration tests

2023-03-23 Thread via GitHub


cmccabe commented on code in PR #13374:
URL: https://github.com/apache/kafka/pull/13374#discussion_r1146574776


##
core/src/test/scala/unit/kafka/tools/StorageToolTest.scala:
##
@@ -221,4 +224,78 @@ Found problem:
 
 assertThrows(classOf[IllegalArgumentException], () => parseMetadataVersion("--release-version", "0.0"))
   }
+
+  @Test
+  def testAddScram():Unit = {
+def parseAddScram(strings: String*): Option[ArrayBuffer[UserScramCredentialRecord]] = {
+  var args = mutable.Seq("format", "-c", "config.props", "-t", "XcZZOzUqS4yHOjhMQB6JLQ")
+  args ++= strings
+  val namespace = StorageTool.parseArguments(args.toArray)
+  StorageTool.getUserScramCredentialRecords(namespace)
+}
+
+var scramRecords = parseAddScram()
+assertEquals(None, scramRecords)
+
+// Validate we can add multiple SCRAM creds.
+scramRecords = parseAddScram("-S",
+"SCRAM-SHA-256=[name=alice,salt=\"MWx2NHBkbnc0ZndxN25vdGN4bTB5eTFrN3E=\",saltedpassword=\"mT0yyUUxnlJaC99HXgRTSYlbuqa4FSGtJCJfTMvjYCE=\",iterations=8192]",
+"-S",
+"SCRAM-SHA-256=[name=george,salt=\"MWx2NHBkbnc0ZndxN25vdGN4bTB5eTFrN3E=\",saltedpassword=\"mT0yyUUxnlJaC99HXgRTSYlbuqa4FSGtJCJfTMvjYCE=\",iterations=8192]")
+
+assertEquals(2, scramRecords.get.size)
+
+// Require name subfield.
+try assertEquals(1, parseAddScram("-S",
+  "SCRAM-SHA-256=[salt=\"MWx2NHBkbnc0ZndxN25vdGN4bTB5eTFrN3E=\",saltedpassword=\"mT0yyUUxnlJaC99HXgRTSYlbuqa4FSGtJCJfTMvjYCE=\",iterations=8192]"))
+catch {
+  case e: TerseFailure => assertEquals(s"You must supply 'name' to add-scram", e.getMessage)
+}
+
+// Require password xor saltedpassword
+try assertEquals(1, parseAddScram("-S",
+  "SCRAM-SHA-256=[name=alice,salt=\"MWx2NHBkbnc0ZndxN25vdGN4bTB5eTFrN3E=\",password=alice,saltedpassword=\"mT0yyUUxnlJaC99HXgRTSYlbuqa4FSGtJCJfTMvjYCE=\",iterations=8192]"))
+catch {
+  case e: TerseFailure => assertEquals(s"You must only supply one of 'password' or 'saltedpassword' to add-scram", e.getMessage)
+}
+
+try assertEquals(1, parseAddScram("-S",
+  "SCRAM-SHA-256=[name=alice,salt=\"MWx2NHBkbnc0ZndxN25vdGN4bTB5eTFrN3E=\",iterations=8192]"))
+catch {
+  case e: TerseFailure => assertEquals(s"You must supply one of 'password' or 'saltedpassword' to add-scram", e.getMessage)
+}
+
+// Validate salt is required with saltedpassword
+try assertEquals(1, parseAddScram("-S",
+  "SCRAM-SHA-256=[name=alice,saltedpassword=\"mT0yyUUxnlJaC99HXgRTSYlbuqa4FSGtJCJfTMvjYCE=\",iterations=8192]"))
+catch {
+  case e: TerseFailure => assertEquals(s"You must supply 'salt' with 'saltedpassword' to add-scram", e.getMessage)
+}
+
+// Validate salt is optional with password
+assertEquals(1, parseAddScram("-S", "SCRAM-SHA-256=[name=alice,password=alice,iterations=4096]").get.size)
+
+// Require 4096 <= iterations <= 16384
+try assertEquals(1, parseAddScram("-S",
+  "SCRAM-SHA-256=[name=alice,salt=\"MWx2NHBkbnc0ZndxN25vdGN4bTB5eTFrN3E=\",password=alice,iterations=16385]"))
+catch {
+  case e: TerseFailure => assertEquals(s"The 'iterations' value must be <= 16384 for add-scram", e.getMessage)
+}
+
+assertEquals(1, parseAddScram("-S",
+  "SCRAM-SHA-256=[name=alice,salt=\"MWx2NHBkbnc0ZndxN25vdGN4bTB5eTFrN3E=\",password=alice,iterations=16384]")
+  .get.size)
+
+try assertEquals(1, parseAddScram("-S",
+  "SCRAM-SHA-256=[name=alice,salt=\"MWx2NHBkbnc0ZndxN25vdGN4bTB5eTFrN3E=\",password=alice,iterations=4095]"))
+catch {
+  case e: TerseFailure => assertEquals(s"The 'iterations' value must be >= 4096 for add-scram", e.getMessage)
+}
+
+assertEquals(1, parseAddScram("-S",
+  "SCRAM-SHA-256=[name=alice,salt=\"MWx2NHBkbnc0ZndxN25vdGN4bTB5eTFrN3E=\",password=alice,iterations=4096]")
+  .get.size)
+
+// Validate iterations is optional
+assertEquals(1, parseAddScram("-S", "SCRAM-SHA-256=[name=alice,password=alice]") .get.size)

Review Comment:
   ok



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] cmccabe commented on a diff in pull request #13374: KAFKA-14765 and KAFKA-14776: Support for SCRAM at bootstrap with integration tests

2023-03-23 Thread via GitHub


cmccabe commented on code in PR #13374:
URL: https://github.com/apache/kafka/pull/13374#discussion_r1146573531


##
core/src/main/scala/kafka/tools/StorageTool.scala:
##
@@ -128,6 +152,108 @@ object StorageTool extends Logging {
   .getOrElse(defaultValue)
   }
 
+  def getUserScramCredentialRecord(mechanism: String,

Review Comment:
   I still feel that this should be in Java, but I guess we can cross that bridge when we get there. Lots of contributors seem to like doing the rewrites; maybe they can do one here.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org