[GitHub] [kafka] jsancio commented on a diff in pull request #12642: KAFKA-14207; KRaft Operations documentation

2022-09-26 Thread GitBox


jsancio commented on code in PR #12642:
URL: https://github.com/apache/kafka/pull/12642#discussion_r980343819


##
config/kraft/README.md:
##
@@ -110,19 +105,15 @@ This is particularly important for the metadata log 
maintained by the controller
 nothing in the log, which would cause all metadata to be lost.
 
 # Missing Features
-We don't support any kind of upgrade right now, either to or from KRaft mode.  
This is an important gap that we are working on.
 
-Finally, the following Kafka features have not yet been fully implemented:
+The following features have not yet been fully implemented:
 
 * Configuring SCRAM users via the administrative API
 * Supporting JBOD configurations with multiple storage directories
 * Modifying certain dynamic configurations on the standalone KRaft controller
-* Support for some configurations, like enabling unclean leader election by 
default or dynamically changing broker endpoints
 * Delegation tokens
 * Upgrade from ZooKeeper mode

Review Comment:
   I think we should delete this file now that we have moved most of this 
information to ops.html. I can do that in a future PR. Didn't want to have this 
discussion in this PR.





[GitHub] [kafka] jsancio commented on a diff in pull request #12642: KAFKA-14207; KRaft Operations documentation

2022-09-26 Thread GitBox


jsancio commented on code in PR #12642:
URL: https://github.com/apache/kafka/pull/12642#discussion_r980338126


##
docs/ops.html:
##
@@ -3180,6 +3180,119 @@ 6.10 KRaft
+
+  Configuration
+
+  Process Roles
+
+  In KRaft mode each Kafka server can be configured as a controller, as a 
broker or as both using the process.roles property. This property 
can have the following values:
+
+  
+If process.roles is set to broker, the 
server acts as a broker.
+If process.roles is set to controller, the 
server acts as a controller.
+If process.roles is set to 
broker,controller, the server acts as a broker and a 
controller.
+If process.roles is not set at all, the server is assumed to be in ZooKeeper mode.
+  
+
+  Nodes that act as both brokers and controllers are referred to as "combined" nodes. Combined nodes are simpler to operate for simple use cases like a development environment. The key disadvantage is that the controller will be less isolated from the rest of the system. Combined mode is not recommended in critical deployment environments.
+
+
+  Controllers
+
+  In KRaft mode, only a small group of specially selected servers can act 
as controllers (unlike the ZooKeeper-based mode, where any server can become 
the Controller). The specially selected controller servers will participate in 
the metadata quorum. Each controller server is either active, or a hot standby 
for the current active controller server.
+
+  A Kafka cluster will typically select 3 or 5 servers for this role, 
depending on factors like cost and the number of concurrent failures your 
system should withstand without availability impact. A majority of the 
controllers must be alive in order to maintain availability. With 3 
controllers, the cluster can tolerate 1 controller failure; with 5 controllers, 
the cluster can tolerate 2 controller failures.
+
+  All of the servers in a Kafka cluster discover the quorum voters using 
the controller.quorum.voters property. This identifies the quorum 
controller servers that should be used. All the controllers must be enumerated. 
Each controller is identified with its id, host, and port information. This is an example configuration: 
controller.quorum.voters=id1@host1:port1,id2@host2:port2,id3@host3:port3
+
+  If the Kafka cluster has 3 controllers named controller1, controller2 and controller3, then controller1 may have the following configuration:
+
+  
+process.roles=controller
+node.id=1
+listeners=CONTROLLER://controller1.example.com:9093
+controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093
+
+  Every broker and controller must set the 
controller.quorum.voters property. The node ID supplied in the 
controller.quorum.voters property must match the corresponding id 
on the controller servers. For example, on controller1, node.id must be set to 
1, and so forth. Each node ID must be unique across all the nodes in a 
particular cluster. No two nodes can have the same node ID regardless of their 
process.roles values.
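+
+  For example, a broker-only node in the same cluster might look like the following (the node ID and hostname here are illustrative); note that it points at the same set of voters but uses its own unique node.id:
+
+process.roles=broker
+node.id=4
+listeners=PLAINTEXT://broker4.example.com:9092
+controller.listener.names=CONTROLLER
+listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
+controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093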
+
+  Storage Tool
+  
+  The kafka-storage.sh random-uuid command can be used to 
generate a cluster ID for your new cluster. This cluster ID must be used when 
formatting each node in the cluster with the kafka-storage.sh 
format command.
+
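+  For example, generating a cluster ID and then formatting a node's storage directories might look like the following (the properties file path here is illustrative):
+
+> KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
+> bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties
+
+  The same cluster ID is then used to format every other node in the cluster, each with its own properties file.
+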
+  This is different from how Kafka has operated in the past. Previously, Kafka would format blank storage directories automatically, and also generate a new cluster ID automatically. One reason for the change is that auto-formatting can sometimes obscure an error condition. This is particularly important for the metadata log maintained by the controller and broker servers. If a majority of the controllers were able to start with an empty log directory, a leader could be elected even though it is missing committed data.
+
+  Debugging
+
+  Metadata Quorum Tool
+
+  The kafka-metadata-quorum tool can be used to describe the runtime state of the cluster metadata partition. For example, the following command displays a summary of the metadata quorum:
+
+> bin/kafka-metadata-quorum.sh --bootstrap-server broker_host:port describe --status
+ClusterId:  fMCL8kv1SWm87L_Md-I2hg
+LeaderId:   3002
+LeaderEpoch:2
+HighWatermark:  10
+MaxFollowerLag: 0
+MaxFollowerLagTimeMs:   -1
+CurrentVoters:  [3000,3001,3002]
+CurrentObservers:   [0,1,2]
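+
+  The same tool also supports a describe --replication option, which reports per-replica details for the cluster metadata partition, such as each voter's and observer's log end offset and lag:
+
+> bin/kafka-metadata-quorum.sh --bootstrap-server broker_host:port describe --replication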
+
+  Dump Log Tool
+
+  The kafka-dump-log tool can be used to debug the log 
segments and snapshots for the cluster metadata directory. The tool will scan 
the provided files and decode the metadata records. For example, this command 
decodes and prints the records in the first log segment:
+
+> bin/kafka-dump-log.sh --cluster-metadata-decoder --skip-record-metadata --files metadata_log_dir/__cluster_metadata-0/.log
+
+  This command decodes and prints the records in a cluster metadata snapshot:
+
+> 
bin/kafka-dump-
