Repository: kafka
Updated Branches:
  refs/heads/0.9.0 537aeae33 -> 26f797931


MINOR: Documentation improvements

* Fix typo in api.html
* Mark security features as beta quality (similar to new consumer). Is there 
better wording?
* Improve wording and clarify things in a number of places
* Improve layout of `pre` blocks (tested locally, which doesn't seem to use the 
same stylesheets as the deployed version)
* Use producer.config in console-producer.sh command
* Improve SASL documentation structure

Author: Ismael Juma <[email protected]>

Reviewers: Jun Rao, Magnus Edenhill, Gwen Shapira

Closes #550 from ijuma/documentation-improvements

(cherry picked from commit c7c7f4cfa7e1c385d5f5706161572d657b495b7a)
Signed-off-by: Gwen Shapira <[email protected]>


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/26f79793
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/26f79793
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/26f79793

Branch: refs/heads/0.9.0
Commit: 26f7979318ada5ec1e0e4e40dee5ee1ca10facd1
Parents: 537aeae
Author: Ismael Juma <[email protected]>
Authored: Thu Nov 19 07:59:03 2015 -0800
Committer: Gwen Shapira <[email protected]>
Committed: Thu Nov 19 07:59:18 2015 -0800

----------------------------------------------------------------------
 docs/api.html      |   2 +-
 docs/security.html | 247 +++++++++++++++++++++++++-----------------------
 2 files changed, 132 insertions(+), 117 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/26f79793/docs/api.html
----------------------------------------------------------------------
diff --git a/docs/api.html b/docs/api.html
index 8d79b20..8a266b7 100644
--- a/docs/api.html
+++ b/docs/api.html
@@ -154,5 +154,5 @@ As of the 0.9.0 release we have added a replacement for our 
existing simple and
        &lt;/dependency&gt;
 </pre>
 
-Examples showing how to use the producer are given in the
+Examples showing how to use the consumer are given in the
 <a 
href="http://kafka.apache.org/090/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html";
 title="Kafka 0.9.0 Javadoc">javadocs</a>.

http://git-wip-us.apache.org/repos/asf/kafka/blob/26f79793/docs/security.html
----------------------------------------------------------------------
diff --git a/docs/security.html b/docs/security.html
index eb5dadb..b697d53 100644
--- a/docs/security.html
+++ b/docs/security.html
@@ -16,16 +16,17 @@
 -->
 
 <h3><a id="security_overview" href="#security_overview">7.1 Security 
Overview</a></h3>
-In release 0.9.0.0, the Kafka community added a number of features that, used 
either separately or together, increases security in a Kafka cluster. The 
following security measures are currently supported:
+In release 0.9.0.0, the Kafka community added a number of features that, used 
either separately or together, increase security in a Kafka cluster. These 
features are considered to be of beta quality. The following security measures 
are currently supported:
 <ol>
-    <li>Authenticating clients (Producers and consumers) connections to 
brokers, using either SSL or SASL (Kerberos)</li>
-    <li>Authorizing read / write operations by clients</li>
-    <li>Encryption of data sent between brokers and clients, or between 
brokers, using SSL (Note there is performance degradation in the clients when 
SSL is enabled. The magnitude of the degradation depends on the CPU type.)</li>
-    <li>Authenticate brokers connecting to ZooKeeper</li>
-    <li>Security is optional - non-secured clusters are supported, as well as 
a mix of authenticated, unauthenticated, encrypted and non-encrypted 
clients.</li>
-    <li>Authorization is pluggable and supports integration with external 
authorization services</li>
+    <li>Authentication of connections to brokers from clients (producers and 
consumers), other brokers and tools, using either SSL or SASL (Kerberos)</li>
+    <li>Authentication of connections from brokers to ZooKeeper</li>
+    <li>Encryption of data transferred between brokers and clients, between 
brokers, or between brokers and tools using SSL (Note that there is a 
performance degradation when SSL is enabled, the magnitude of which depends on 
the CPU type and the JVM implementation.)</li>
+    <li>Authorization of read / write operations by clients</li>
+    <li>Authorization is pluggable and integration with external authorization 
services is supported</li>
 </ol>
 
+It's worth noting that security is optional - non-secured clusters are 
supported, as well as a mix of authenticated, unauthenticated, encrypted and 
non-encrypted clients.
+
 The guides below explain how to configure and use the security features in 
both clients and brokers.
 
 <h3><a id="security_ssl" href="#security_ssl">7.2 Encryption and 
Authentication using SSL</a></h3>
@@ -35,7 +36,8 @@ Apache Kafka allows clients to connect over SSL. By default 
SSL is disabled but
     <li><h4><a id="security_ssl_key" href="#security_ssl_key">Generate SSL key 
and certificate for each Kafka broker</a></h4>
         The first step of deploying HTTPS is to generate the key and the 
certificate for each machine in the cluster. You can use Java’s keytool 
utility to accomplish this task.
        We will generate the key into a temporary keystore initially so that 
we can export and sign it later with the CA.
-        <pre>$ keytool -keystore server.keystore.jks -alias localhost 
-validity {validity} -genkey</pre>
+        <pre>
+        keytool -keystore server.keystore.jks -alias localhost -validity 
{validity} -genkey</pre>
 
         You need to specify two parameters in the above command:
         <ol>
@@ -47,30 +49,34 @@ Apache Kafka allows clients to connect over SSL. By default 
SSL is disabled but
     <li><h4><a id="security_ssl_ca" href="#security_ssl_ca">Creating your own 
CA</a></h4>
         After the first step, each machine in the cluster has a public-private 
key pair, and a certificate to identify the machine. The certificate, however, 
is unsigned, which means that an attacker can create such a certificate to 
pretend to be any machine.<p>
         Therefore, it is important to prevent forged certificates by signing 
them for each machine in the cluster. A certificate authority (CA) is 
responsible for signing certificates. A CA works like a government that issues 
passports—the government stamps (signs) each passport so that the passport 
becomes difficult to forge. Other governments verify the stamps to ensure the 
passport is authentic. Similarly, the CA signs the certificates, and the 
cryptography guarantees that a signed certificate is computationally difficult 
to forge. Thus, as long as the CA is a genuine and trusted authority, the 
clients have high assurance that they are connecting to the authentic machines.
-        <pre>openssl req <b>-new</b> -x509 -keyout ca-key -out ca-cert -days 
365</pre>
+        <pre>
+        openssl req <b>-new</b> -x509 -keyout ca-key -out ca-cert -days 
365</pre>
 
         The generated CA is simply a public-private key pair and certificate, 
and it is intended to sign other certificates.<br>
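
        As a quick sanity check, you can inspect the generated CA certificate 
(this uses the ca-cert file name from the command above and prints the 
certificate's subject and validity period):
        <pre>
        openssl x509 -in ca-cert -noout -subject -dates</pre>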
 
        The next step is to add the generated CA to the <b>clients' 
truststore</b> so that the clients can trust this CA:
-        <pre>keytool -keystore server.truststore.jks -alias CARoot 
<b>-import</b> -file ca-cert</pre>
+        <pre>
+        keytool -keystore server.truststore.jks -alias CARoot <b>-import</b> 
-file ca-cert</pre>
 
-        <b>Note:</b> If you configure Kafka brokers to require client 
authentication by setting ssl.client.auth to be "requested" or "required" on <a 
href="#config_broker">Kafka broker config</a> then you must provide a 
truststore for Kafka broker as well and it should have all the CA certificates 
that clients keys signed by.
-        <pre>keytool -keystore client.truststore.jks -alias CARoot -import 
-file ca-cert</pre>
+        <b>Note:</b> If you configure the Kafka brokers to require client 
authentication by setting ssl.client.auth to be "requested" or "required" on 
the <a href="#config_broker">Kafka broker config</a> then you must provide a 
truststore for the Kafka brokers as well and it should have all the CA 
certificates that clients' keys were signed by.
+        <pre>
+        keytool -keystore client.truststore.jks -alias CARoot -import -file 
ca-cert</pre>
 
-        In contrast to the keystore in step 1 that stores each machine’s own 
identity, the truststore of a client stores all the certificates that the 
client should trust. Importing a certificate into one’s truststore also means 
that trusting all certificates that are signed by that certificate. As the 
analogy above, trusting the government (CA) also means that trusting all 
passports (certificates) that it has issued. This attribute is called the 
chains of trust, and it is particularly useful when deploying SSL on a large 
Kafka cluster. You can sign all certificates in the cluster with a single CA, 
and have all machines share the same truststore that trusts the CA. That way 
all machines can authenticate all other machines.</li>
+        In contrast to the keystore in step 1 that stores each machine’s own 
identity, the truststore of a client stores all the certificates that the 
client should trust. Importing a certificate into one’s truststore also means 
trusting all certificates that are signed by that certificate. As in the 
analogy above, trusting the government (CA) also means trusting all passports 
(certificates) that it has issued. This attribute is called the chain of trust, 
and it is particularly useful when deploying SSL on a large Kafka cluster. You 
can sign all certificates in the cluster with a single CA, and have all 
machines share the same truststore that trusts the CA. That way all machines 
can authenticate all other machines.</li>
 
     <li><h4><a id="security_ssl_signing" href="#security_ssl_signing">Signing 
the certificate</a></h4>
         The next step is to sign all certificates generated by step 1 with the 
CA generated in step 2. First, you need to export the certificate from the 
keystore:
-        <pre>keytool -keystore server.keystore.jks -alias localhost -certreq 
-file cert-file</pre>
+        <pre>
+        keytool -keystore server.keystore.jks -alias localhost -certreq -file 
cert-file</pre>
 
         Then sign it with the CA:
-        <pre>openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out 
cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}</pre>
+        <pre>
+        openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out 
cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}</pre>
 
         Finally, you need to import both the certificate of the CA and the 
signed certificate into the keystore:
         <pre>
-            $ keytool -keystore server.keystore.jks -alias CARoot -import 
-file ca-cert
-            $ keytool -keystore server.keystore.jks -alias localhost -import 
-file cert-signed
-        </pre>
+        keytool -keystore server.keystore.jks -alias CARoot -import -file 
ca-cert
+        keytool -keystore server.keystore.jks -alias localhost -import -file 
cert-signed</pre>
 
         The definitions of the parameters are the following:
         <ol>
@@ -95,15 +101,15 @@ Apache Kafka allows clients to connect over SSL. By 
default SSL is disabled but
         keytool -keystore server.keystore.jks -alias localhost -certreq -file 
cert-file
         openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out 
cert-signed -days 365 -CAcreateserial -passin pass:test1234
         keytool -keystore server.keystore.jks -alias CARoot -import -file 
ca-cert
-        keytool -keystore server.keystore.jks -alias localhost -import -file 
cert-signed
-                </pre></li>
-    <li><h4><a id="security_configbroker" 
href="#security_configbroker">Configuring Kafka Broker</a></h4>
-        Kafka Broker comes with the feature of listening on multiple ports 
thanks to [KAFKA-1809](https://issues.apache.org/jira/browse/KAFKA-1809).
+        keytool -keystore server.keystore.jks -alias localhost -import -file 
cert-signed</pre></li>
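
        At this point you can verify that both the CA certificate and the 
signed certificate ended up in the keystore. As a sketch (using the file names 
above; keytool will prompt for the keystore password):
        <pre>
        keytool -list -v -keystore server.keystore.jks</pre>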
+    <li><h4><a id="security_configbroker" 
href="#security_configbroker">Configuring Kafka Brokers</a></h4>
+        Kafka Brokers support listening for connections on multiple ports.
         We need to configure the following property in server.properties, 
which must have one or more comma-separated values:
         <pre>listeners</pre>
 
         If SSL is not enabled for inter-broker communication (see below for 
how to enable it), both PLAINTEXT and SSL ports will be necessary.
-        <pre>listeners=PLAINTEXT://host.name:port,SSL://host.name:port</pre>
+        <pre>
+        listeners=PLAINTEXT://host.name:port,SSL://host.name:port</pre>
 
        The following SSL configs are needed on the broker side:
         <pre>
@@ -111,25 +117,28 @@ Apache Kafka allows clients to connect over SSL. By 
default SSL is disabled but
         ssl.keystore.password = test1234
         ssl.key.password = test1234
         ssl.truststore.location = /var/private/ssl/kafka.server.truststore.jks
-        ssl.truststore.password = test1234
-        </pre>
+        ssl.truststore.password = test1234</pre>
 
        Optional settings that are worth considering (a combined example 
follows the list):
         <ol>
-            <li>ssl.client.auth = none ("required" => client authentication is 
required, "requested" => client authentication is requested and client without 
certs can still connect when this option chosen")</li>
+            <li>ssl.client.auth = none ("required" => client authentication is 
required, "requested" => client authentication is requested and clients without 
certs can still connect. The usage of "requested" is discouraged as it provides 
a false sense of security and misconfigured clients will still connect 
successfully.)</li>
             <li>ssl.cipher.suites = A cipher suite is a named combination of 
authentication, encryption, MAC and key exchange algorithm used to negotiate 
the security settings for a network connection using TLS or SSL network 
protocol. (Default is an empty list)</li>
-            <li>ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1 (list out the 
SSL protocols that you are going to accept from clients. Do note SSL is 
deprecated and using that in production is not recommended)</li>
-            <li> ssl.keystore.type = JKS</li>
+            <li>ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1 (list out the 
SSL protocols that you are going to accept from clients. Do note that SSL is 
deprecated in favor of TLS and using SSL in production is not recommended)</li>
+            <li>ssl.keystore.type = JKS</li>
             <li>ssl.truststore.type = JKS</li>
         </ol>
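
        For illustration, a broker that requires client authentication and 
accepts only TLSv1.2 might combine the optional settings above as follows 
(example values, not defaults):
        <pre>
        ssl.client.auth = required
        ssl.enabled.protocols = TLSv1.2
        ssl.keystore.type = JKS
        ssl.truststore.type = JKS</pre>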
         If you want to enable SSL for inter-broker communication, add the 
following to the broker properties file (it defaults to PLAINTEXT)
-        <pre>security.inter.broker.protocol = SSL</pre>
+        <pre>
+        security.inter.broker.protocol = SSL</pre>
 
-        If you want to enable any cipher suites other than the defaults that 
comes with JVM like the ones listed here:
-        <a 
href="https://docs.oracle.com/javase/7/docs/technotes/guides/security/SunProviders.html";>https://docs.oracle.com/javase/7/docs/technotes/guides/security/SunProviders.html</a>
 you will need to install <b><a 
href="http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html";>Unlimited
 Strength Policy files</a></b><br>
+        <p>
+        Due to import regulations in some countries, the Oracle implementation 
limits the strength of cryptographic algorithms available by default. If 
stronger algorithms are needed (for example, AES with 256-bit keys), the <a 
href="http://www.oracle.com/technetwork/java/javase/downloads/index.html";>JCE 
Unlimited Strength Jurisdiction Policy Files</a> must be obtained and installed 
in the JDK/JRE. See the
+        <a 
href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html";>JCA
 Providers Documentation</a> for more information.
+        </p>
 
        Once you start the broker you should be able to see the following in 
server.log:
-        <pre>with addresses: PLAINTEXT -> 
EndPoint(192.168.64.1,9092,PLAINTEXT),SSL -> 
EndPoint(192.168.64.1,9093,SSL)</pre>
+        <pre>
+        with addresses: PLAINTEXT -> EndPoint(192.168.64.1,9092,PLAINTEXT),SSL 
-> EndPoint(192.168.64.1,9093,SSL)</pre>
 
        To quickly check if the server keystore and truststore are set up 
properly you can run the following command:
         <pre>openssl s_client -debug -connect localhost:9093 -tls1</pre> 
(Note: TLSv1 should be listed under ssl.enabled.protocols)<br>
@@ -139,124 +148,130 @@ Apache Kafka allows clients to connect over SSL. By 
default SSL is disabled but
         {variable sized random bytes}
         -----END CERTIFICATE-----
         subject=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=Sriharsha 
Chintalapani
-        issuer=/C=US/ST=CA/L=Santa 
Clara/O=org/OU=org/CN=kafka/[email protected]
-            </pre>
+        issuer=/C=US/ST=CA/L=Santa 
Clara/O=org/OU=org/CN=kafka/[email protected]</pre>
        If the certificate does not show up or if there are any other error 
messages then your keystore is not set up properly.</li>
 
-    <li><h4><a id="security_configclients" 
href="#security_configclients">Configuring Kafka Clients</a></h4>h4>
-        SSL is supported only for new Kafka Producer & Consumer, the older API 
is not supported. The configs for SSL will be same for both producer & 
consumer.<br>
+    <li><h4><a id="security_configclients" 
href="#security_configclients">Configuring Kafka Clients</a></h4>
+        SSL is supported only for the new Kafka Producer and Consumer, the 
older API is not supported. The configs for SSL will be same for both producer 
and consumer.<br>
         If client authentication is not required in the broker, then the 
following is a minimal configuration example:
         <pre>
         security.protocol = SSL
         ssl.truststore.location = 
"/var/private/ssl/kafka.client.truststore.jks"
-        ssl.truststore.password = "test1234"
-            </pre>
+        ssl.truststore.password = "test1234"</pre>
 
         If client authentication is required, then a keystore must be created 
like in step 1 and the following must also be configured:
-            <pre>
+        <pre>
         ssl.keystore.location = "/var/private/ssl/kafka.client.keystore.jks"
         ssl.keystore.password = "test1234"
-        ssl.key.password = "test1234"
-                </pre>
-        Other configuration settings that may also be needed depending on our 
requirements and the broker configuration:\
+        ssl.key.password = "test1234"</pre>
+        Other configuration settings that may also be needed depending on your 
requirements and the broker configuration:
             <ol>
                 <li>ssl.provider (Optional). The name of the security provider 
used for SSL connections. Default value is the default security provider of the 
JVM.</li>
                 <li>ssl.cipher.suites (Optional). A cipher suite is a named 
combination of authentication, encryption, MAC and key exchange algorithm used 
to negotiate the security settings for a network connection using TLS or SSL 
network protocol.</li>
-                <li>ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1 **Should list 
at least one of the protocols configured on the broker side</li>
+                <li>ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1. It should 
list at least one of the protocols configured on the broker side</li>
                 <li>ssl.truststore.type = "JKS"</li>
                 <li>ssl.keystore.type = "JKS"</li>
             </ol>
 <br>
         Examples using console-producer and console-consumer:
         <pre>
-            kafka-console-producer.sh --broker-list localhost:9093 --topic 
test --new-producer --producer-property "security.protocol=SSL"  
--producer-property "ssl.truststore.location=client.truststore.jks" 
--producer-property "ssl.truststore.password=test1234"
-
-            kafka-console-consumer.sh --bootstrap-server localhost:9093 
--topic test --new-consumer --consumer.config client-ssl.properties
-            </pre>
+        kafka-console-producer.sh --broker-list localhost:9093 --topic test 
--producer.config client-ssl.properties
+        kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic 
test --new-consumer --consumer.config client-ssl.properties</pre>
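
        The client-ssl.properties file referenced above is assumed to contain 
the client SSL configs described earlier, for example (a minimal sketch without 
client authentication):
        <pre>
        security.protocol = SSL
        ssl.truststore.location = "/var/private/ssl/kafka.client.truststore.jks"
        ssl.truststore.password = "test1234"</pre>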
     </li>
 </ol>
 <h3><a id="security_sasl" href="#security_sasl">7.3 Authentication using 
SASL</a></h3>
 
 <ol>
-    <li><h4><a id="security_sasl_prereq" 
href="#security_sasl_prereq">Prerequisites</a></h4><br>
+    <li><h4><a id="security_sasl_prereq" 
href="#security_sasl_prereq">Prerequisites</a></h4>
     <ol>
         <li><b>Kerberos</b><br>
        If your organization is already using a Kerberos server (for example, 
by using Active Directory), there is no need to install a new server just for 
Kafka. Otherwise you will need to install one; your Linux vendor likely has 
packages for Kerberos and a short guide on how to install and configure it (<a 
href="https://help.ubuntu.com/community/Kerberos">Ubuntu</a>, <a 
href="https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Managing_Smart_Cards/installing-kerberos.html">Redhat</a>).
 Note that if you are using Oracle Java, you will need to download JCE policy 
files for your Java version and copy them to $JAVA_HOME/jre/lib/security.</li>
         <li><b>Create Kerberos Principals</b><br>
-        If you are using the organization's Kerberos or Active Directory 
server, ask your Kerberos administrator for a principal for each Kafka broker 
in your cluster and for every Linux user that will access Kafka with Kerberos 
authentication.</br>
-        If you installed your own Kerberos, you will need to create these 
principals yourself:</br>
-            <code>sudo /usr/sbin/kadmin.local -q 'addprinc -randkey 
kafka/hostname@domainname'<br>
-                sudo /usr/sbin/kadmin.local -q "ktadd -k 
/etc/security/keytabs/kafka.keytab kafka/hostname@domainname"</code></li>
-        <li><b>Make sure all hosts can be reachable using hostnames</b> - It 
is important in case of kerberos all your hosts can be resolved with their 
FQDNs.</li>
-        <li><b><a name="jaas_config_file">Creating JAAS Config File</a></b><br>
-            Each node in the cluster should have a JAAS file similar to the 
example below. Add this file to kafka/config dir:
+        If you are using the organization's Kerberos or Active Directory 
server, ask your Kerberos administrator for a principal for each Kafka broker 
in your cluster and for every operating system user that will access Kafka with 
Kerberos authentication (via clients and tools).<br>
+        If you have installed your own Kerberos, you will need to create these 
principals yourself using the following commands:
+            <pre>
+    sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/{hostname}@{REALM}'
+    sudo /usr/sbin/kadmin.local -q "ktadd -k 
/etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"</pre></li>
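
        To verify that the keytab contains the expected principal, you can 
list its entries (a sketch using the placeholder names above):
        <pre>
    klist -kt /etc/security/keytabs/{keytabname}.keytab</pre>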
+        <li><b>Make sure all hosts are reachable using hostnames</b> - it 
is a Kerberos requirement that all your hosts can be resolved with their 
FQDNs.</li>
+    </ol>
+    <li><h4><a id="security_sasl_brokerconfig" 
href="#security_sasl_brokerconfig">Configuring Kafka Brokers</a></h4>
+    <ol>
+        <li>Add a suitably modified JAAS file similar to the one below to each 
Kafka broker's config directory; let's call it kafka_server_jaas.conf for this 
example (note that each broker should have its own keytab):
         <pre>
-            KafkaServer {
-                com.sun.security.auth.module.Krb5LoginModule required
-                useKeyTab=true
-                storeKey=true
-                serviceName="kafka"
-                keyTab="/etc/security/keytabs/kafka1.keytab"
-                principal="kafka/[email protected]";
-            };
-
-            Client {
-               com.sun.security.auth.module.Krb5LoginModule required
-               useKeyTab=true
-               storeKey=true
-               serviceName="zookeeper"
-               keyTab="/etc/security/keytabs/kafka1.keytab"
-               principal="kafka/[email protected]";
-            };
-
-            KafkaClient {
-               com.sun.security.auth.module.Krb5LoginModule required
-               useTicketCache=true
-               serviceName="kafka";
-            };
-        </pre>
-            <u>Important notes:</u>
-            <ol>
-                <li>KafkaServer is a section name in JAAS file used by 
KafkaServer/Broker. This section tells Kafka Server which principal to use and 
which keytab this principal is stored. It allows Kafka Server to login using 
the keytab specified in this section.</li>
-                <li>Client section is used to authenticate a SASL connection 
with zookeeper. It also allows a broker to set SASL ACL on zookeeper nodes 
which locks these nodes down so that only kafka broker can modify. It is 
necessary to have the same principal name across all the brokers. If you want 
to use a section name other than Client, then you need to set the system 
property <tt>zookeeper.sasl.client</tt> to the appropriate name (<i>e.g.</i>, 
<tt>-Dzookeeper.sasl.client=ZkClient</tt>).</li>
-                <li>KafkaClient section here describes how the clients like 
producer and consumer can connect to the Kafka Broker. Here we specified 
"useTicketCache=true" not a keytab this allows user to do kinit and run a 
kafka-console-consumer or kafka-console-producer to connect to broker. For a 
long running process one should create KafkaClient section similar to 
KafkaServer.</li>
-                <li>In KafkaServer and KafkaClient sections we've 
"serviceName" this should match principal name with which kafka broker is 
running. In the above example principal="kafka/[email protected]" 
so we've "kafka" which is matching the principalName.</li>
-            </ol>
+    KafkaServer {
+        com.sun.security.auth.module.Krb5LoginModule required
+        useKeyTab=true
+        storeKey=true
+        keyTab="/etc/security/keytabs/kafka_server.keytab"
+        principal="kafka/[email protected]";
+    };
+
+    # Zookeeper client authentication
+    Client {
+       com.sun.security.auth.module.Krb5LoginModule required
+       useKeyTab=true
+       storeKey=true
+       keyTab="/etc/security/keytabs/kafka_server.keytab"
+       principal="kafka/[email protected]";
+    };</pre>
+
         </li>
-        <li><h4><a id="security_sasl_jaas" href="#security_sasl_jaas">Creating 
Client Side JAAS Config</a></h4>
-        Clients (producers, consumers, connect workers, etc) will authenticate 
to the cluster with their own principal (usually with the same name as the user 
used for running the client), so obtain or create these principals as needed. 
Then create a JAAS file as follows:
+        <li>Pass the name of the JAAS file as a JVM parameter to each Kafka 
broker:
             <pre>
-                KafkaClient {
-                    com.sun.security.auth.module.Krb5LoginModule required
-                    useKeyTab=true
-                    storeKey=true
-                    serviceName="kafka"
-                    keyTab="/etc/security/keytabs/kafka1.keytab"
-                    principal="kafkaproducer/[email protected]";
-                };
-            </pre>
+    -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</pre>
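
            If the standard start scripts are used, one way to pass this is 
via the KAFKA_OPTS environment variable (a sketch; paths are examples):
            <pre>
    export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
    bin/kafka-server-start.sh config/server.properties</pre>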
         </li>
-    </ol></li>
-    <li><h4><a id="security_sasl_brokerconfig" 
href="#security_sasl_brokerconfig">Configuring Kafka Brokers</a></h4>
-    <ol>
-        <li>Pass the name of the jaas file you created in <a 
href="#jaas_config_file">Creating JAAS Config File"</a> as a JVM parameter to 
the kafka broker: 
<pre>-Djava.security.auth.login.config=/etc/kafka/kafka_jaas.conf</pre></li>
-        <li>Make sure the keytabs configured in the kafka_jaas.conf are 
readable by the linux user who is starting kafka broker.</li>
-        <li>Configure a SASL port in server.properties, by adding the 
following to the <i>listeners</i> parameter, which contains one or more 
comma-separated values:
-            <pre>listeners=SASL_PLAINTEXT://host.name:port</pre>
-        If you are only configuring SASL port (or if you are very paranoid and 
want the Kafka brokers to authenticate each other using SASL) then make sure 
you set same SASL protocol for inter-broker communication:
-        <pre>security.inter.broker.protocol=SASL_PLAINTEXT</pre></li>
+        <li>Make sure the keytabs configured in the JAAS file are readable by 
the operating system user who is starting the Kafka broker.</li>
+        <li>Configure a SASL port in server.properties by adding at least one 
of SASL_PLAINTEXT or SASL_SSL to the <i>listeners</i> parameter, which contains 
one or more comma-separated values:
+        <pre>
+    listeners=SASL_PLAINTEXT://host.name:port</pre>
+        If SASL_SSL is used, then <a href="#security_ssl">SSL must also be 
configured</a>.
+        If you are only configuring a SASL port (or if you want the Kafka 
brokers to authenticate each other using SASL) then make sure you set the same 
SASL protocol for inter-broker communication:
+        <pre>
+    security.inter.broker.protocol=SASL_PLAINTEXT (or SASL_SSL)</pre></li>
+
+        We must also configure the service name in server.properties, which 
should match the principal name of the Kafka brokers. In the above example, the 
principal is "kafka/[email protected]", so:
+        <pre>
+    sasl.kerberos.service.name="kafka"</pre>
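
        Putting the settings above together, a broker that uses SASL_PLAINTEXT 
for both clients and inter-broker communication would have the following in 
server.properties (a sketch combining the snippets above):
        <pre>
    listeners=SASL_PLAINTEXT://host.name:port
    security.inter.broker.protocol=SASL_PLAINTEXT
    sasl.kerberos.service.name="kafka"</pre>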
+
+        <u>Important notes:</u>
+        <ol>
+            <li>KafkaServer is a section name in the JAAS file used by each 
KafkaServer/Broker. This section tells the broker which principal to use and 
the location of the keytab where this principal is stored. It allows the broker 
to log in using the keytab specified in this section.</li>
+            <li>The Client section is used to authenticate a SASL connection 
with ZooKeeper. It also allows the brokers to set a SASL ACL on ZooKeeper 
nodes, which locks these nodes down so that only the brokers can modify them. 
It is necessary to have the same principal name across all brokers. If you want 
to use a section name other than Client, set the system property 
<tt>zookeeper.sasl.client</tt> to the appropriate name (<i>e.g.</i>, 
<tt>-Dzookeeper.sasl.client=ZkClient</tt>).</li>
+            <li>ZooKeeper uses "zookeeper" as the service name by default. If 
you want to change this, set the system property 
<tt>zookeeper.sasl.client.username</tt> to the appropriate name (<i>e.g.</i>, 
<tt>-Dzookeeper.sasl.client.username=zk</tt>).</li>
+        </ol>
 
     </ol>
-    </li>
     <li><h4><a id="security_sasl_clientconfig" 
href="#security_sasl_clientconfig">Configuring Kafka Clients</a></h4>
-        SASL authentication is only supported for new kafka producer and 
consumer, the older API is not supported.>br>
-        To configure SASL authentication on the clients:
+        SASL authentication is only supported for the new Kafka producer and 
consumer; the older API is not supported. To configure SASL authentication on 
the clients:
         <ol>
-            <li>pass the name of the jaas file you created in <a 
href="#security_sasl_jaas">Creating Client Side JAAS Config"</a> as a JVM 
parameter to the client JVM:
-        
<pre>-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf</pre></li>
-            <li>Make sure the keytabs configured in the kafka_client_jaas.conf 
are readable by the linux user who is starting kafka client.</li>
-            <li>Configure the following property in producer.properties or 
consumer.properties:
-                <pre>security.protocol=SASL_PLAINTEXT</pre></li>
+            <li>
+                Clients (producers, consumers, connect workers, etc) will 
authenticate to the cluster with their own principal (usually with the same 
name as the user running the client), so obtain or create these principals as 
needed. Then create a JAAS file for each principal.
+                The KafkaClient section describes how clients such as the 
producer and consumer connect to the Kafka broker. The following is an 
example configuration for a client using a keytab (recommended for long-running 
processes):
+            <pre>
+    KafkaClient {
+        com.sun.security.auth.module.Krb5LoginModule required
+        useKeyTab=true
+        storeKey=true
+        keyTab="/etc/security/keytabs/kafka_client.keytab"
+        principal="[email protected]";
+    };</pre>
+
+            For command-line utilities like kafka-console-consumer or 
kafka-console-producer, kinit can be used along with "useTicketCache=true" as 
in:
+            <pre>
+    KafkaClient {
+        com.sun.security.auth.module.Krb5LoginModule required
+        useTicketCache=true;
+    };</pre>
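
            With the ticket cache configuration, the user first obtains a 
Kerberos ticket with kinit and then runs the tool, for example (the principal 
is the illustrative one from the keytab example above):
            <pre>
    kinit [email protected]</pre>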
+            </li>
+            <li>Pass the name of the JAAS file as a JVM parameter to the 
client JVM:
+        <pre>
+    
-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf</pre></li>
+            <li>Make sure the keytabs configured in kafka_client_jaas.conf 
are readable by the operating system user who is starting the Kafka client.</li>
+            <li>Configure the following properties in producer.properties or 
consumer.properties:
+                <pre>
+    security.protocol=SASL_PLAINTEXT (or SASL_SSL)
+    sasl.kerberos.service.name="kafka"</pre>
+            </li>
         </ol></li>
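
    As an end-to-end sketch (host, port and the client-sasl.properties file 
name are examples; the file would contain the two properties above):
    <pre>
    export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf"
    kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test 
--new-consumer --consumer.config client-sasl.properties</pre>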
 </ol>
 
@@ -406,10 +421,10 @@ To enable ZooKeeper authentication on brokers, there are 
two necessary steps:
        <li> Set the configuration property <tt>zookeeper.set.acl</tt> in each 
broker to true</li>
 </ol>
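
In concrete terms (a sketch; the JAAS file path is an example), each broker is 
started with the JAAS login file passed as a JVM parameter:
<pre>
    -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</pre>
and with the following in server.properties:
<pre>
    zookeeper.set.acl=true</pre>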
 
-The metadata stored in ZooKeeper is such that only brokers will be able to 
modify the corresponding znodes, but znodes are world readable. The rationale 
behind this decision is that the data stored in ZooKeeper is not sensitive, but 
inappropriate manipulation of znodes can cause cluster disruption.
+The metadata stored in ZooKeeper is such that only brokers will be able to 
modify the corresponding znodes, but znodes are world readable. The rationale 
behind this decision is that the data stored in ZooKeeper is not sensitive, but 
inappropriate manipulation of znodes can cause cluster disruption. We also 
recommend limiting access to ZooKeeper via network segmentation (only 
brokers and some admin tools need access to ZooKeeper if the new consumer and 
new producer are used).
 
 <h4><a id="zk_authz_migration" href="#zk_authz_migration">7.5.2 Migrating 
clusters</a></h4>
-If you are running a version of Kafka that does not support security of simply 
with security disabled, and you want to make the cluster secure, then you need 
to execute the following steps to enable ZooKeeper authentication with minimal 
disruption to your operations:
+If you are running a version of Kafka that does not support security or simply 
with security disabled, and you want to make the cluster secure, then you need 
to execute the following steps to enable ZooKeeper authentication with minimal 
disruption to your operations:
 <ol>
        <li>Perform a rolling restart setting the JAAS login file, which 
enables brokers to authenticate. At the end of the rolling restart, brokers are 
able to manipulate znodes with strict ACLs, but they will not create znodes 
with those ACLs</li>
        <li>Perform a second rolling restart of brokers, this time setting the 
configuration parameter <tt>zookeeper.set.acl</tt> to true, which enables the 
use of secure ACLs when creating znodes</li>
