rondagostino commented on a change in pull request #10414:
URL: https://github.com/apache/kafka/pull/10414#discussion_r602270763



##########
File path: config/raft/README.md
##########
@@ -65,16 +68,19 @@ controllers, you can tolerate 1 failure; with 5 controllers, you can tolerate 2
 ## Process Roles
 Each Kafka server now has a new configuration key called `process.roles` which can have the following values:
 
-* If `process.roles` is set to `broker`, the server acts as a self-managed broker.
-* If `process.roles` is set to `controller`, the server acts as a self-managed controller.
-* If `process.roles` is set to `broker,controller`, the server acts as both a self-managed broker and a self-managed controller.
-* If `process.roles` is not set at all then we are assumed to be in ZooKeeper mode.  As mentioned earlier, you can't currently transition back and forth between ZK mode and self-managed mode without reformatting.
+* If `process.roles` is set to `broker`, the server acts as a broker.
+* If `process.roles` is set to `controller`, the server acts as a controller.
+* If `process.roles` is set to `broker,controller`, the server acts as both a broker and a controller.
+* If `process.roles` is not set at all then we are assumed to be in ZooKeeper mode.  As mentioned earlier, you can't currently transition back and forth between ZK mode and Raft mode without reformatting.

Review comment:
       `s/ZK/ZooKeeper/` as was done below?
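
       (For anyone skimming this thread: a minimal sketch of what a combined-node configuration might look like under the renamed terminology. The property names are the real ones quoted above; the hostname, port, and ID are placeholders, and this is not a complete server config, just the role-related keys.)

```
# This node acts as both a broker and a controller ("combined" node) in Raft mode
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@controller1.example.com:9093
```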

##########
File path: config/raft/README.md
##########
@@ -84,21 +90,26 @@ listeners=CONTROLLER://controller1.example.com:9093
 
controller.quorum.voters=1...@controller1.example.com:9093,2...@controller2.example.com:9093,3...@controller3.example.com:9093
 ```
 
-Each broker and each controller must set `controller.quorum.voters`.  Note that the node ID supplied in the controller.quorum.voters configuration must match that supplied to the server.  So on controller1, node.id must be set to 1, and so forth.  Note that there is no requirement for controller IDs to start at 0 or 1.  However, the easiest and least confusing way to allocate node IDs is probably just to give each server a numeric ID, starting from 0.
+Each broker and each controller must set `controller.quorum.voters`.  Note that the node ID supplied in the `controller.quorum.voters` configuration must match that supplied to the server.
+So on controller1, node.id must be set to 1, and so forth.  Note that there is no requirement for controller IDs to start at 0 or 1.  However, the easiest and least confusing way to allocate
+node IDs is probably just to give each server a numeric ID, starting from 0.

Review comment:
       Just noting that I think these are just line break changes.
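
       (To make the ID-matching constraint concrete, an illustrative pairing; the hostnames are placeholders, and the voters list is identical on every node.)

```
# On controller1: node.id must equal the ID given for this host in the voters list
node.id=1
controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093

# On controller2: same voters list, different node.id
node.id=2
controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093
```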

##########
File path: config/raft/README.md
##########
@@ -84,21 +90,26 @@ listeners=CONTROLLER://controller1.example.com:9093
 
controller.quorum.voters=1...@controller1.example.com:9093,2...@controller2.example.com:9093,3...@controller3.example.com:9093
 ```
 
-Each broker and each controller must set `controller.quorum.voters`.  Note that the node ID supplied in the controller.quorum.voters configuration must match that supplied to the server.  So on controller1, node.id must be set to 1, and so forth.  Note that there is no requirement for controller IDs to start at 0 or 1.  However, the easiest and least confusing way to allocate node IDs is probably just to give each server a numeric ID, starting from 0.
+Each broker and each controller must set `controller.quorum.voters`.  Note that the node ID supplied in the `controller.quorum.voters` configuration must match that supplied to the server.
+So on controller1, node.id must be set to 1, and so forth.  Note that there is no requirement for controller IDs to start at 0 or 1.  However, the easiest and least confusing way to allocate
+node IDs is probably just to give each server a numeric ID, starting from 0.
 
 Note that clients never need to configure `controller.quorum.voters`; only servers do.
 
 ## Kafka Storage Tool
 As described above in the QuickStart section, you must use the `kafka-storage.sh` tool to generate a cluster ID for your new cluster, and then run the format command on each node before starting the node.
 
-This is different from how Kafka has operated in the past.  Previously, Kafka would format blank storage directories automatically, and also generate a new cluster UUID automatically.  One reason for the change is that auto-formatting can sometimes obscure an error condition.  For example, under UNIX, if a data directory can't be mounted, it may show up as blank.  In this case, auto-formatting would be the wrong thing to do.
+This is different from how Kafka has operated in the past.  Previously, Kafka would format blank storage directories automatically, and also generate a new cluster UUID automatically.  One reason for the change
+is that auto-formatting can sometimes obscure an error condition.  For example, under UNIX, if a data directory can't be mounted, it may show up as blank.  In this case, auto-formatting would be the wrong thing to do.

Review comment:
       just noting linebreak-only changes
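
       (For context, the formatting flow being described is roughly the following; the subcommand names are as I understand them from the tool's help output, and the cluster ID placeholder and config path are illustrative.)

```
# Generate a cluster ID once, then reuse it for every node in the cluster
./bin/kafka-storage.sh random-uuid

# Format each node's storage directories with that ID before first startup
./bin/kafka-storage.sh format -t <cluster-id> -c ./config/raft/server.properties
```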

##########
File path: config/raft/README.md
##########
@@ -84,21 +90,26 @@ listeners=CONTROLLER://controller1.example.com:9093
 
controller.quorum.voters=1...@controller1.example.com:9093,2...@controller2.example.com:9093,3...@controller3.example.com:9093
 ```
 
-Each broker and each controller must set `controller.quorum.voters`.  Note that the node ID supplied in the controller.quorum.voters configuration must match that supplied to the server.  So on controller1, node.id must be set to 1, and so forth.  Note that there is no requirement for controller IDs to start at 0 or 1.  However, the easiest and least confusing way to allocate node IDs is probably just to give each server a numeric ID, starting from 0.
+Each broker and each controller must set `controller.quorum.voters`.  Note that the node ID supplied in the `controller.quorum.voters` configuration must match that supplied to the server.
+So on controller1, node.id must be set to 1, and so forth.  Note that there is no requirement for controller IDs to start at 0 or 1.  However, the easiest and least confusing way to allocate
+node IDs is probably just to give each server a numeric ID, starting from 0.
 
 Note that clients never need to configure `controller.quorum.voters`; only servers do.
 
 ## Kafka Storage Tool
 As described above in the QuickStart section, you must use the `kafka-storage.sh` tool to generate a cluster ID for your new cluster, and then run the format command on each node before starting the node.
 
-This is different from how Kafka has operated in the past.  Previously, Kafka would format blank storage directories automatically, and also generate a new cluster UUID automatically.  One reason for the change is that auto-formatting can sometimes obscure an error condition.  For example, under UNIX, if a data directory can't be mounted, it may show up as blank.  In this case, auto-formatting would be the wrong thing to do.
+This is different from how Kafka has operated in the past.  Previously, Kafka would format blank storage directories automatically, and also generate a new cluster UUID automatically.  One reason for the change
+is that auto-formatting can sometimes obscure an error condition.  For example, under UNIX, if a data directory can't be mounted, it may show up as blank.  In this case, auto-formatting would be the wrong thing to do.
 
-This is particularly important for the metadata log maintained by the controller servers.  If two controllers out of three controllers were able to start with blank logs, a leader might be able to be elected with nothing in the log, which would cause all metadata to be lost.
+This is particularly important for the metadata log maintained by the controller servers.  If two controllers out of three controllers were able to start with blank logs, a leader might be able to be elected with
+nothing in the log, which would cause all metadata to be lost.

Review comment:
       just noting linebreak-only changes

##########
File path: config/raft/README.md
##########
@@ -65,16 +68,19 @@ controllers, you can tolerate 1 failure; with 5 controllers, you can tolerate 2
 ## Process Roles
 Each Kafka server now has a new configuration key called `process.roles` which can have the following values:
 
-* If `process.roles` is set to `broker`, the server acts as a self-managed broker.
-* If `process.roles` is set to `controller`, the server acts as a self-managed controller.
-* If `process.roles` is set to `broker,controller`, the server acts as both a self-managed broker and a self-managed controller.
-* If `process.roles` is not set at all then we are assumed to be in ZooKeeper mode.  As mentioned earlier, you can't currently transition back and forth between ZK mode and self-managed mode without reformatting.
+* If `process.roles` is set to `broker`, the server acts as a broker.
+* If `process.roles` is set to `controller`, the server acts as a controller.
+* If `process.roles` is set to `broker,controller`, the server acts as both a broker and a controller.
+* If `process.roles` is not set at all then we are assumed to be in ZooKeeper mode.  As mentioned earlier, you can't currently transition back and forth between ZK mode and Raft mode without reformatting.
 
 Nodes that act as both brokers and controllers are referred to as "combined" nodes.  Combined nodes are simpler to operate for simple use cases and allow you to avoid
-some of the fixed memory overheads associated with JVMs.  The key disadvantage is that the controller will be less isolated from the rest of the system.  For example, if activity on the broker causes an out of memory condition, the controller part of the server is not isolated from that OOM condition.
+some fixed memory overheads associated with JVMs.  The key disadvantage is that the controller will be less isolated from the rest of the system.  For example, if activity on the broker causes an out of
+memory condition, the controller part of the server is not isolated from that OOM condition.
 
 ## Quorum Voters
-All nodes in the system must set the `controller.quorum.voters` configuration.  This identifies the quorum controller servers that should be used.  All the controllers must be enumerated.  This is similar to how, when using ZooKeeper, the `zookeeper.connect` configuration must contain all the ZooKeeper servers.  Unlike with the ZK config, however, `controller.quorum.voters` also has IDs for each node.  The format is id1@host1:port1,id2@host2:port2, etc.
+All nodes in the system must set the `controller.quorum.voters` configuration.  This identifies the quorum controller servers that should be used.  All the controllers must be enumerated.
+This is similar to how, when using ZooKeeper, the `zookeeper.connect` configuration must contain all the ZooKeeper servers.  Unlike with the ZooKeeper config, however, `controller.quorum.voters`
+also has IDs for each node.  The format is id1@host1:port1,id2@host2:port2, etc.

Review comment:
       Just noting that the change here was `s/ZK/ZooKeeper/` since the added line breaks make it difficult to see quickly what the change actually was.
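
       (Side by side, the two configurations being compared look roughly like this; hosts and ports are placeholders.)

```
# ZooKeeper mode: enumerate the ZooKeeper ensemble; no per-node IDs
zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181

# Raft mode: enumerate the controller quorum, each entry prefixed with that node's ID
controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093
```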

##########
File path: config/raft/README.md
##########
@@ -84,21 +90,26 @@ listeners=CONTROLLER://controller1.example.com:9093
 
controller.quorum.voters=1...@controller1.example.com:9093,2...@controller2.example.com:9093,3...@controller3.example.com:9093
 ```
 
-Each broker and each controller must set `controller.quorum.voters`.  Note that the node ID supplied in the controller.quorum.voters configuration must match that supplied to the server.  So on controller1, node.id must be set to 1, and so forth.  Note that there is no requirement for controller IDs to start at 0 or 1.  However, the easiest and least confusing way to allocate node IDs is probably just to give each server a numeric ID, starting from 0.
+Each broker and each controller must set `controller.quorum.voters`.  Note that the node ID supplied in the `controller.quorum.voters` configuration must match that supplied to the server.
+So on controller1, node.id must be set to 1, and so forth.  Note that there is no requirement for controller IDs to start at 0 or 1.  However, the easiest and least confusing way to allocate
+node IDs is probably just to give each server a numeric ID, starting from 0.
 
 Note that clients never need to configure `controller.quorum.voters`; only servers do.
 
 ## Kafka Storage Tool
 As described above in the QuickStart section, you must use the `kafka-storage.sh` tool to generate a cluster ID for your new cluster, and then run the format command on each node before starting the node.
 
-This is different from how Kafka has operated in the past.  Previously, Kafka would format blank storage directories automatically, and also generate a new cluster UUID automatically.  One reason for the change is that auto-formatting can sometimes obscure an error condition.  For example, under UNIX, if a data directory can't be mounted, it may show up as blank.  In this case, auto-formatting would be the wrong thing to do.
+This is different from how Kafka has operated in the past.  Previously, Kafka would format blank storage directories automatically, and also generate a new cluster UUID automatically.  One reason for the change
+is that auto-formatting can sometimes obscure an error condition.  For example, under UNIX, if a data directory can't be mounted, it may show up as blank.  In this case, auto-formatting would be the wrong thing to do.
 
-This is particularly important for the metadata log maintained by the controller servers.  If two controllers out of three controllers were able to start with blank logs, a leader might be able to be elected with nothing in the log, which would cause all metadata to be lost.
+This is particularly important for the metadata log maintained by the controller servers.  If two controllers out of three controllers were able to start with blank logs, a leader might be able to be elected with
+nothing in the log, which would cause all metadata to be lost.
 
 # Missing Features
-We do not yet support generating or loading KIP-630 metadata snapshots.  This means that after a while, the time required to restart a broker will become very large.  This is a known issue and we are working on completing snapshots for the next release.
+We do not yet support generating or loading KIP-630 metadata snapshots.  This means that after a while, the time required to restart a broker will become very large.  This is a known issue and we are working on
+completing snapshots for the next release.

Review comment:
       just noting linebreak-only changes

##########
File path: config/raft/README.md
##########
@@ -65,16 +68,19 @@ controllers, you can tolerate 1 failure; with 5 controllers, you can tolerate 2
 ## Process Roles
 Each Kafka server now has a new configuration key called `process.roles` which can have the following values:
 
-* If `process.roles` is set to `broker`, the server acts as a self-managed broker.
-* If `process.roles` is set to `controller`, the server acts as a self-managed controller.
-* If `process.roles` is set to `broker,controller`, the server acts as both a self-managed broker and a self-managed controller.
-* If `process.roles` is not set at all then we are assumed to be in ZooKeeper mode.  As mentioned earlier, you can't currently transition back and forth between ZK mode and self-managed mode without reformatting.
+* If `process.roles` is set to `broker`, the server acts as a broker.
+* If `process.roles` is set to `controller`, the server acts as a controller.
+* If `process.roles` is set to `broker,controller`, the server acts as both a broker and a controller.

Review comment:
       It feels to me that we need to mention "Raft mode" in these lines somehow.  Perhaps "`...the server acts as <a broker|a controller|both a broker and a controller> in Raft mode`"?  Or maybe `acts in Raft mode as <a broker|a controller|both a broker and a controller>`?

##########
File path: config/raft/broker.properties
##########
@@ -14,13 +14,13 @@
 # limitations under the License.
 
 #
-# This configuration file is intended for use in self-managed mode, where
-# Apache ZooKeeper is not present.  See config/self-managed/README.md for details.
+# This configuration file is intended for use in Raft mode, where
+# Apache ZooKeeper is not present.  See config/raft/README.md for details.
 #
 
 ############################# Server Basics #############################
 
-# The role of this server. Setting this puts us in self-managed mode
+# The role of this server. Setting this puts us in raft mode

Review comment:
       `s/raft mode/Raft mode/` for consistency in capitalization




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

