[ https://issues.apache.org/jira/browse/FLINK-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16550701#comment-16550701 ]

ASF GitHub Bot commented on FLINK-8981:
---------------------------------------

Github user dawidwys commented on a diff in the pull request:

    https://github.com/apache/flink/pull/6377#discussion_r204020995
  
    --- Diff: flink-end-to-end-tests/test-scripts/docker-hadoop-secure-cluster/bootstrap.sh ---
    @@ -0,0 +1,121 @@
    +#!/bin/bash
    +################################################################################
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#     http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing, software
    +# distributed under the License is distributed on an "AS IS" BASIS,
    +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    +# See the License for the specific language governing permissions and
    +# limitations under the License.
    +################################################################################
    +
    +: ${HADOOP_PREFIX:=/usr/local/hadoop}
    +
    +$HADOOP_PREFIX/etc/hadoop/hadoop-env.sh
    +
    +rm /tmp/*.pid
    +
    +# installing libraries if any - (resource urls added comma separated to the ACP system variable)
    +cd $HADOOP_PREFIX/share/hadoop/common ; for cp in ${ACP//,/ }; do  echo == $cp; curl -LO $cp ; done; cd -
    +
    +# kerberos client
    +sed -i "s/EXAMPLE.COM/${KRB_REALM}/g" /etc/krb5.conf
    +sed -i "s/example.com/${DOMAIN_REALM}/g" /etc/krb5.conf
    +
    +# update config files
    +sed -i "s/HOSTNAME/$(hostname -f)/g" $HADOOP_PREFIX/etc/hadoop/core-site.xml
    +sed -i "s/EXAMPLE.COM/${KRB_REALM}/g" $HADOOP_PREFIX/etc/hadoop/core-site.xml
    +sed -i "s#/etc/security/keytabs#${KEYTAB_DIR}#g" $HADOOP_PREFIX/etc/hadoop/core-site.xml
    +
    +sed -i "s/EXAMPLE.COM/${KRB_REALM}/g" $HADOOP_PREFIX/etc/hadoop/hdfs-site.xml
    +sed -i "s/HOSTNAME/$(hostname -f)/g" $HADOOP_PREFIX/etc/hadoop/hdfs-site.xml
    +sed -i "s#/etc/security/keytabs#${KEYTAB_DIR}#g" $HADOOP_PREFIX/etc/hadoop/hdfs-site.xml
    +
    +sed -i "s/EXAMPLE.COM/${KRB_REALM}/g" $HADOOP_PREFIX/etc/hadoop/yarn-site.xml
    +sed -i "s/HOSTNAME/$(hostname -f)/g" $HADOOP_PREFIX/etc/hadoop/yarn-site.xml
    +sed -i "s#/etc/security/keytabs#${KEYTAB_DIR}#g" $HADOOP_PREFIX/etc/hadoop/yarn-site.xml
    +
    +sed -i "s/EXAMPLE.COM/${KRB_REALM}/g" $HADOOP_PREFIX/etc/hadoop/mapred-site.xml
    +sed -i "s/HOSTNAME/$(hostname -f)/g" $HADOOP_PREFIX/etc/hadoop/mapred-site.xml
    +sed -i "s#/etc/security/keytabs#${KEYTAB_DIR}#g" $HADOOP_PREFIX/etc/hadoop/mapred-site.xml
    +
    +sed -i "s#/usr/local/hadoop/bin/container-executor#${NM_CONTAINER_EXECUTOR_PATH}#g" $HADOOP_PREFIX/etc/hadoop/yarn-site.xml
    +
    +# create namenode kerberos principal and keytab
    +kadmin -p ${KERBEROS_ADMIN} -w ${KERBEROS_ADMIN_PASSWORD} -q "addprinc -randkey hdfs/$(hostname -f)@${KRB_REALM}"
    +kadmin -p ${KERBEROS_ADMIN} -w ${KERBEROS_ADMIN_PASSWORD} -q "addprinc -randkey mapred/$(hostname -f)@${KRB_REALM}"
    +kadmin -p ${KERBEROS_ADMIN} -w ${KERBEROS_ADMIN_PASSWORD} -q "addprinc -randkey yarn/$(hostname -f)@${KRB_REALM}"
    +kadmin -p ${KERBEROS_ADMIN} -w ${KERBEROS_ADMIN_PASSWORD} -q "addprinc -randkey HTTP/$(hostname -f)@${KRB_REALM}"
    +
    +kadmin -p ${KERBEROS_ADMIN} -w ${KERBEROS_ADMIN_PASSWORD} -q "xst -k hdfs.keytab hdfs/$(hostname -f) HTTP/$(hostname -f)"
    +kadmin -p ${KERBEROS_ADMIN} -w ${KERBEROS_ADMIN_PASSWORD} -q "xst -k mapred.keytab mapred/$(hostname -f) HTTP/$(hostname -f)"
    +kadmin -p ${KERBEROS_ADMIN} -w ${KERBEROS_ADMIN_PASSWORD} -q "xst -k yarn.keytab yarn/$(hostname -f) HTTP/$(hostname -f)"
    +
    +mkdir -p ${KEYTAB_DIR}
    +mv hdfs.keytab ${KEYTAB_DIR}
    +mv mapred.keytab ${KEYTAB_DIR}
    +mv yarn.keytab ${KEYTAB_DIR}
    +chmod 400 ${KEYTAB_DIR}/hdfs.keytab
    +chmod 400 ${KEYTAB_DIR}/mapred.keytab
    +chmod 400 ${KEYTAB_DIR}/yarn.keytab
    +chown hdfs:hadoop ${KEYTAB_DIR}/hdfs.keytab
    +chown mapred:hadoop ${KEYTAB_DIR}/mapred.keytab
    +chown yarn:hadoop ${KEYTAB_DIR}/yarn.keytab
    +
    +service ssh start
    --- End diff ---
    
    Can we just make ssh start automatically in Dockerfile?
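
    One way that could look (a hypothetical Dockerfile fragment, not the actual change in this PR) is to start the ssh daemon from the container's entrypoint and then hand off to bootstrap.sh, so the bootstrap script no longer needs the `service ssh start` line:

    ```dockerfile
    # Hypothetical sketch: start sshd at container startup, then run the
    # bootstrap script. Paths assume bootstrap.sh is copied into the image.
    COPY bootstrap.sh /etc/bootstrap.sh
    ENTRYPOINT ["/bin/bash", "-c", "service ssh start && exec /etc/bootstrap.sh"]
    ```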


> Add end-to-end test for running on YARN with Kerberos
> -----------------------------------------------------
>
>                 Key: FLINK-8981
>                 URL: https://issues.apache.org/jira/browse/FLINK-8981
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Security, Tests
>    Affects Versions: 1.5.0
>            Reporter: Till Rohrmann
>            Assignee: Aljoscha Krettek
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 1.6.0
>
>
> We should add an end-to-end test which verifies Flink's integration with 
> Kerberos security. In order to do this, we should start a Kerberos secured 
> Hadoop, ZooKeeper and Kafka cluster. Then we should start a Flink cluster 
> with HA enabled and run a job which reads from and writes to Kafka. We could 
> use a simple pipe job for that purpose which has some state for checkpointing 
> to HDFS.
> See the [security docs|https://ci.apache.org/projects/flink/flink-docs-master/ops/security-kerberos.html]
> for more information about Flink's Kerberos integration.
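
The test flow sketched in the description could be driven by a harness along these lines (a rough sketch only; every function body is a hypothetical placeholder, not code from this PR):

```shell
#!/bin/bash
# Hypothetical sketch of the end-to-end test flow described in the issue.
# All steps are placeholders; the real scripts live under
# flink-end-to-end-tests/test-scripts/.
set -euo pipefail

start_secure_cluster() {
    # Placeholder: bring up Kerberos-secured Hadoop, ZooKeeper and Kafka
    # (e.g. via docker-compose) and wait until all services are healthy.
    echo "cluster up"
}

run_pipe_job() {
    # Placeholder: submit a Flink pipe job (HA enabled) that reads from
    # Kafka, checkpoints its state to HDFS, and writes back to Kafka.
    echo "job finished"
}

verify_output() {
    # Placeholder: compare the records in the output topic against the
    # expected result.
    echo "output verified"
}

start_secure_cluster
run_pipe_job
verify_output
```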



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)