Github user mmiklavc commented on a diff in the pull request:

    https://github.com/apache/incubator-metron/pull/521#discussion_r113203115
  
    --- Diff: metron-deployment/vagrant/Kerberos-setup.md ---
    @@ -55,35 +112,245 @@ General Kerberization notes can be found in the 
metron-deployment [README.md](..
     
     ![enable kerberos configure](../readme-images/enable-kerberos-configure-kerberos.png)
     
    -    c. Click through to “Start and Test Services.” Let the cluster 
spin up.
    +    c. Click through to “Start and Test Services.” Let the cluster spin up, but don't worry about starting Metron via Ambari - we're going to run the parsers manually against the rest of the Kerberized Hadoop cluster. The wizard will fail to start Metron, but this is OK. Click “continue.” When you’re finished, the custom storm-site should look similar to the following:
    +
    +    ![enable kerberos configure](../readme-images/custom-storm-site-final.png)
    +
    +1. Create a Metron keytab.
     
    -## Push Data
    -1. Kinit with the metron user
         ```
    -    kinit -kt /etc/security/keytabs/metron.headless.keytab 
met...@example.com
    +   kadmin.local -q "ktadd -k metron.headless.keytab met...@example.com"
    +   cp metron.headless.keytab /etc/security/keytabs
    +   chown metron:hadoop /etc/security/keytabs/metron.headless.keytab
    +   chmod 440 /etc/security/keytabs/metron.headless.keytab
    +   ```
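A quick way to sanity-check the ownership and mode steps above is to inspect the resulting file. Since creating the real keytab requires `kadmin.local`, the sketch below demonstrates the same `chmod 440` check against a scratch file; the `stat -c` invocation assumes GNU coreutils, as on the full dev image.

```shell
# Demonstrate the 0440 permission scheme used for the keytab above, using a
# scratch file in place of /etc/security/keytabs/metron.headless.keytab.
tmp=$(mktemp)
chmod 440 "$tmp"
stat -c '%a' "$tmp"   # octal mode; 440 = owner and group read-only
rm -f "$tmp"
```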
    +
    +Kafka Authorization
    +-------------------
    +
    +1. Acquire a Kerberos ticket using the `metron` principal.
    +
         ```
    +   kinit -kt /etc/security/keytabs/metron.headless.keytab met...@example.com
    +   ```
    +
    +1. Create any additional Kafka topics that you will need. The topics must exist before the required ACLs can be added. The current full dev installation will deploy only bro, snort, enrichments, and indexing. For example, you may want to add a topic for 'yaf' telemetry.
     
    -2. Push some sample data to one of the parser topics. E.g for bro we took 
raw data from 
[incubator-metron/metron-platform/metron-integration-test/src/main/sample/data/bro/raw/BroExampleOutput](../../metron-platform/metron-integration-test/src/main/sample/data/bro/raw/BroExampleOutput)
         ```
    -    cat sample-bro.txt | 
${HDP_HOME}/kafka-broker/bin/kafka-console-producer.sh --broker-list 
${BROKERLIST}:6667 --security-protocol SASL_PLAINTEXT --topic bro
    +   ${KAFKA_HOME}/bin/kafka-topics.sh \
    +      --zookeeper ${ZOOKEEPER}:2181 \
    +      --create \
    +      --topic yaf \
    +      --partitions 1 \
    +      --replication-factor 1
    +   ```
    +
    +1. Set up Kafka ACLs for the `bro`, `snort`, `enrichments`, and `indexing` topics.  Run the same command against any additional topics that you might be using; for example `yaf`.
    +
         ```
    +   export KERB_USER=metron
    +
    +   for topic in bro snort enrichments indexing; do
    +       ${KAFKA_HOME}/bin/kafka-acls.sh \
    +           --authorizer kafka.security.auth.SimpleAclAuthorizer \
    +           --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 \
    +           --add \
    +           --allow-principal User:${KERB_USER} \
    +           --topic ${topic}
    +   done
    +   ```
    +
    +1. Set up Kafka ACLs for the consumer groups.  This command sets the ACLs for Bro, Snort, YAF, Enrichments, Indexing, and the Profiler.  Execute the same command for any additional parsers that you may be running.
     
    -3. Wait a few moments for data to flow through the system and then check 
for data in the Elasticsearch indexes. Replace bro with whichever parser type 
you’ve chosen.
         ```
    -    curl -XGET "${ZOOKEEPER}:9200/bro*/_search"
    -    curl -XGET "${ZOOKEEPER}:9200/bro*/_count"
    +   export KERB_USER=metron
    +
    +   for group in bro_parser snort_parser yaf_parser enrichments indexing profiler; do
    +       ${KAFKA_HOME}/bin/kafka-acls.sh \
    +           --authorizer kafka.security.auth.SimpleAclAuthorizer \
    +           --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 \
    +           --add \
    +           --allow-principal User:${KERB_USER} \
    +           --group ${group}
    +   done
    +   ```
    +
    +1. Add the `metron` principal to the `kafka-cluster` ACL.
    +
    +   ```
    +   ${KAFKA_HOME}/bin/kafka-acls.sh \
    +       --authorizer kafka.security.auth.SimpleAclAuthorizer \
    +       --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 \
    +       --add \
    +       --allow-principal User:${KERB_USER} \
    +       --cluster kafka-cluster
    +   ```
    +
    +HBase Authorization
    +-------------------
    +
    +1. Acquire a Kerberos ticket using the `hbase` principal.
    +
    +   ```
    +   kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase-metron_clus...@example.com
    +   ```
    +
    +1. Grant permissions for the HBase tables used in Metron.
    +
    +    ```
    +   echo "grant 'metron', 'RW', 'threatintel'" | hbase shell
    +   echo "grant 'metron', 'RW', 'enrichment'" | hbase shell
    +   ```
    +
    +1. If you are using the Profiler, create its HBase table and grant the same permissions.
    +
         ```
    +   echo "create 'profiler', 'P'" | hbase shell
    +   echo "grant 'metron', 'RW', 'profiler', 'P'" | hbase shell
    +   ```
     
    -4. You should have data flowing from the parsers all the way through to 
the indexes. This completes the Kerberization instructions
    +Storm Authorization
    +-------------------
    +
    +1. Switch to the `metron` user and acquire a Kerberos ticket for the 
`metron` principal.
    +
    +   ```
    +   su metron
    +   kinit -kt /etc/security/keytabs/metron.headless.keytab met...@example.com
    +   ```
    +
    +1. Create the directory `/home/metron/.storm` and switch to that directory.
    +
    +    ```
    +   mkdir /home/metron/.storm
    +   cd /home/metron/.storm
    +   ```
    +
    +1. Create a client JAAS file at `/home/metron/.storm/client_jaas.conf`.  
This should look identical to the Storm client JAAS file located at 
`/etc/storm/conf/client_jaas.conf` except for the addition of a `Client` 
stanza. The `Client` stanza is used for Zookeeper. All quotes and semicolons 
are necessary.
    +
    +    ```
    +    cat << EOF > client_jaas.conf
    +    StormClient {
    +        com.sun.security.auth.module.Krb5LoginModule required
    +        useTicketCache=true
    +        renewTicket=true
    +        serviceName="nimbus";
    +    };
    +    Client {
    +        com.sun.security.auth.module.Krb5LoginModule required
    +        useKeyTab=true
    +        keyTab="/etc/security/keytabs/metron.headless.keytab"
    +        storeKey=true
    +        useTicketCache=false
    +        serviceName="zookeeper"
    +        principal="met...@example.com";
    +    };
    +    KafkaClient {
    +        com.sun.security.auth.module.Krb5LoginModule required
    +        useKeyTab=true
    +        keyTab="/etc/security/keytabs/metron.headless.keytab"
    +        storeKey=true
    +        useTicketCache=false
    +        serviceName="kafka"
    +        principal="met...@example.com";
    +    };
    +    EOF
    +    ```
    +
    +1. Create a YAML file at `/home/metron/.storm/storm.yaml`.  This should 
point to the client JAAS file.  Set the array of nimbus hosts accordingly.
    +
    +    ```
    +    cat << EOF > /home/metron/.storm/storm.yaml
    +    nimbus.seeds : ['node1']
    +    java.security.auth.login.config : '/home/metron/.storm/client_jaas.conf'
    +    storm.thrift.transport : 'org.apache.storm.security.auth.kerberos.KerberosSaslTransportPlugin'
    +    EOF
    +    ```
    +
    +1. Create an auxiliary storm configuration file at 
`/home/metron/storm-config.json`. Note the login config option in the file 
points to the client JAAS file.
    +
    +    ```
    +    cat << EOF > /home/metron/storm-config.json
    +    {
    +        "topology.worker.childopts" : "-Djava.security.auth.login.config=/home/metron/.storm/client_jaas.conf"
    +    }
    +    EOF
    +    ```
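A malformed `storm-config.json` only surfaces as an error at topology submission time, so it can be worth validating the JSON up front. The sketch below is an assumption, not part of the original instructions: it writes an equivalent file to a scratch path and checks it with `python3` (substitute `/home/metron/storm-config.json` in practice).

```shell
# Validate a storm-config.json-style file before submitting topologies.
# Uses a scratch path; substitute /home/metron/storm-config.json in practice.
f=$(mktemp)
cat << 'EOF' > "$f"
{
    "topology.worker.childopts" : "-Djava.security.auth.login.config=/home/metron/.storm/client_jaas.conf"
}
EOF
python3 -c "import json, sys; json.load(open(sys.argv[1])); print('valid JSON')" "$f"
rm -f "$f"
```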
    +
    +1. Configure the Enrichment, Indexing, and Profiler topologies to use the client JAAS file.  Add the following properties to each of the topology properties files.
    +
    +   ```
    +   kafka.security.protocol=PLAINTEXTSASL
    +   topology.worker.childopts=-Djava.security.auth.login.config=/home/metron/.storm/client_jaas.conf
    +   ```
    +
    +    * `${METRON_HOME}/config/enrichment.properties`
    +    * `${METRON_HOME}/config/elasticsearch.properties`
    +    * `${METRON_HOME}/config/profiler.properties`
    +
    +    Use the following command to automate this step.
    +
    +    ```
    +    for file in enrichment.properties elasticsearch.properties profiler.properties; do
    +      echo ${file}
    +      sed -i "s/^kafka.security.protocol=.*/kafka.security.protocol=PLAINTEXTSASL/" "${METRON_HOME}/config/${file}"
    +      sed -i "s/^topology.worker.childopts=.*/topology.worker.childopts=-Djava.security.auth.login.config=\/home\/metron\/.storm\/client_jaas.conf/" "${METRON_HOME}/config/${file}"
    +    done
    +    ```
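To see what the loop above actually does to each properties file, here is the same pair of substitutions run against a throwaway file; the starting property values are hypothetical stand-ins, and `sed -i` as used here is the GNU form.

```shell
# Demonstrate the two substitutions from the loop above on a scratch file
# standing in for a topology properties file under ${METRON_HOME}/config.
f=$(mktemp)
printf 'kafka.security.protocol=PLAINTEXT\ntopology.worker.childopts=\n' > "$f"
sed -i "s/^kafka.security.protocol=.*/kafka.security.protocol=PLAINTEXTSASL/" "$f"
sed -i "s/^topology.worker.childopts=.*/topology.worker.childopts=-Djava.security.auth.login.config=\/home\/metron\/.storm\/client_jaas.conf/" "$f"
cat "$f"
rm -f "$f"
```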
    +
    +Start Metron
    +------------
    +
    +1. Switch to the `metron` user and acquire a Kerberos ticket for the 
`metron` principal.
    +
    +   ```
    +   su metron
    +   kinit -kt /etc/security/keytabs/metron.headless.keytab met...@example.com
    +   ```
    +
    +1. Restart the parser topologies. Be sure to pass in the new parameter, `-ksp` or `--kafka_security_protocol`.  The following command will start only the Bro and Snort topologies.  Execute the same command for any other parsers that you may need, for example `yaf`.
    +
    +    ```
    +    for parser in bro snort; do
    +        ${METRON_HOME}/bin/start_parser_topology.sh \
    +            -z ${ZOOKEEPER}:2181 \
    +            -s ${parser} \
    +            -ksp SASL_PLAINTEXT \
    +            -e /home/metron/storm-config.json
    +    done
    +    ```
    +
    +1. Restart the Enrichment and Indexing topologies.
    +
    +    ```
    +   ${METRON_HOME}/bin/start_enrichment_topology.sh
    +   ${METRON_HOME}/bin/start_elasticsearch_topology.sh
    +   ```
    +
    +1. Push some sample data to one of the parser topics. For example, for Bro we took raw data from [incubator-metron/metron-platform/metron-integration-test/src/main/sample/data/bro/raw/BroExampleOutput](../../metron-platform/metron-integration-test/src/main/sample/data/bro/raw/BroExampleOutput)
    +
    +   ```
    +   cat sample-bro.txt | ${HDP_HOME}/kafka-broker/bin/kafka-console-producer.sh --broker-list ${BROKERLIST}:6667 --security-protocol SASL_PLAINTEXT --topic bro
    +   ```
    +
    +1. Wait a few moments for data to flow through the system and then check for data in the Elasticsearch indices. Replace bro with whichever parser type you’ve chosen.
    +
    +    ```
    +   curl -XGET "${ZOOKEEPER}:9200/bro*/_search"
    --- End diff --
    
    Just noticed this issue that came in with the original PR - should be 
ELASTICSEARCH, not ZOOKEEPER.

