[ https://issues.apache.org/jira/browse/FLINK-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16550580#comment-16550580 ]

ASF GitHub Bot commented on FLINK-8981:
---------------------------------------

Github user aljoscha commented on a diff in the pull request:

    https://github.com/apache/flink/pull/6377#discussion_r203990078
  
    --- Diff: flink-end-to-end-tests/test-scripts/test_yarn_kerberos_docker.sh ---
    @@ -0,0 +1,104 @@
    +#!/usr/bin/env bash
    +################################################################################
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#     http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing, software
    +# distributed under the License is distributed on an "AS IS" BASIS,
    +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    +# See the License for the specific language governing permissions and
    +# limitations under the License.
    +################################################################################
    +set -o pipefail
    +
    +source "$(dirname "$0")"/common.sh
    +
    +FLINK_TARBALL_DIR=$TEST_DATA_DIR
    +FLINK_TARBALL=flink.tar.gz
    +FLINK_DIRNAME=$(basename $FLINK_DIR)
    +
    +echo "Flink Tarball directory $FLINK_TARBALL_DIR"
    +echo "Flink tarball filename $FLINK_TARBALL"
    +echo "Flink distribution directory name $FLINK_DIRNAME"
    +echo "End-to-end directory $END_TO_END_DIR"
    +docker --version
    +docker-compose --version
    +
    +mkdir -p $FLINK_TARBALL_DIR
    +tar czf $FLINK_TARBALL_DIR/$FLINK_TARBALL -C $(dirname $FLINK_DIR) .
    +
    +echo "Building Hadoop Docker container"
    +until docker build -f $END_TO_END_DIR/test-scripts/docker-hadoop-secure-cluster/Dockerfile -t flink/docker-hadoop-secure-cluster:latest $END_TO_END_DIR/test-scripts/docker-hadoop-secure-cluster/; do
    +    # with all the downloading and ubuntu updating a lot of flakiness can happen, make sure
    +    # we don't immediately fail
    +    echo "Something went wrong while building the Docker image, retrying ..."
    +    sleep 2
    +done
    +
    +echo "Starting Hadoop cluster"
    +docker-compose -f $END_TO_END_DIR/test-scripts/docker-hadoop-secure-cluster/docker-compose.yml up -d
    +
    +# make sure we stop our cluster at the end
    +function cluster_shutdown {
    +  # don't call ourselves again for another signal interruption
    +  trap "exit -1" INT
    +  # don't call ourselves again for normal exit
    +  trap "" EXIT
    +
    +  docker-compose -f $END_TO_END_DIR/test-scripts/docker-hadoop-secure-cluster/docker-compose.yml down
    +  rm $FLINK_TARBALL_DIR/$FLINK_TARBALL
    +}
    +trap cluster_shutdown INT
    +trap cluster_shutdown EXIT
    +
    +until docker cp $FLINK_TARBALL_DIR/$FLINK_TARBALL master:/home/hadoop-user/; do
    +    # we're retrying this one because we don't know yet if the container is ready
    +    echo "Uploading Flink tarball to docker master failed, retrying ..."
    +    sleep 5
    +done
    +
    +# now, at least the container is ready
    +docker exec -it master bash -c "tar xzf /home/hadoop-user/$FLINK_TARBALL --directory /home/hadoop-user/"
    +
    +docker exec -it master bash -c "echo \"security.kerberos.login.keytab: /home/hadoop-user/hadoop-user.keytab\" >> /home/hadoop-user/$FLINK_DIRNAME/conf/flink-conf.yaml"
    +docker exec -it master bash -c "echo \"security.kerberos.login.principal: hadoop-user\" >> /home/hadoop-user/$FLINK_DIRNAME/conf/flink-conf.yaml"
    +
    +echo "Flink config:"
    +docker exec -it master bash -c "cat /home/hadoop-user/$FLINK_DIRNAME/conf/flink-conf.yaml"
    +
    +# make the output path random, just in case it already exists, for example if we
    +# had cached docker containers
    +OUTPUT_PATH=hdfs:///user/hadoop-user/wc-out-$RANDOM
    +
    +# it's important to run this with higher parallelism, otherwise we might risk that
    +# JM and TM are on the same YARN node and that we therefore don't test the keytab shipping
    +until docker exec -it master bash -c "export HADOOP_CLASSPATH=\`hadoop classpath\` && /home/hadoop-user/$FLINK_DIRNAME/bin/flink run -m yarn-cluster -yn 3 -ys 1 -ytm 1200 -yjm 800 -p 3 /home/hadoop-user/$FLINK_DIRNAME/examples/streaming/WordCount.jar --output $OUTPUT_PATH"; do
    +    echo "Running the Flink job failed, might be that the cluster is not ready yet, retrying ..."
    --- End diff ---
    
    I'm afraid not, that's why there are the retries around the stuff that deals with HDFS/YARN.
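
    As a side note (not part of the diff): the retry pattern used throughout the script could be factored into a small helper. The function name retry_times and its arguments below are hypothetical, a sketch only:

        function retry_times {
            local retries=$1
            local delay=$2
            shift 2
            local i
            for (( i = 0; i < retries; i++ )); do
                # run the wrapped command and stop as soon as it succeeds
                if "$@"; then
                    return 0
                fi
                echo "Command '$*' failed, retrying in ${delay}s ..."
                sleep "$delay"
            done
            return 1
        }

        # hypothetical usage:
        # retry_times 10 5 docker cp $FLINK_TARBALL_DIR/$FLINK_TARBALL master:/home/hadoop-user/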


> Add end-to-end test for running on YARN with Kerberos
> -----------------------------------------------------
>
>                 Key: FLINK-8981
>                 URL: https://issues.apache.org/jira/browse/FLINK-8981
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Security, Tests
>    Affects Versions: 1.5.0
>            Reporter: Till Rohrmann
>            Assignee: Aljoscha Krettek
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 1.6.0
>
>
> We should add an end-to-end test which verifies Flink's integration with 
> Kerberos security. In order to do this, we should start a Kerberos secured 
> Hadoop, ZooKeeper and Kafka cluster. Then we should start a Flink cluster 
> with HA enabled and run a job which reads from and writes to Kafka. We could 
> use a simple pipe job for that purpose which has some state for checkpointing 
> to HDFS.
> See the [security docs|https://ci.apache.org/projects/flink/flink-docs-master/ops/security-kerberos.html] for more information about Flink's Kerberos integration.
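
As a rough illustration (not taken from the issue or the PR), the Flink configuration such a test would exercise might look like the snippet below; the keytab path, principal, and ZooKeeper/HDFS addresses are placeholders:

    # Kerberos credentials used by the Flink cluster (placeholder values)
    security.kerberos.login.keytab: /path/to/test-user.keytab
    security.kerberos.login.principal: test-user
    # make the credentials available to the Hadoop, ZooKeeper and Kafka connectors
    security.kerberos.login.contexts: Client,KafkaClient

    # HA setup backed by the secured ZooKeeper/HDFS cluster (placeholder addresses)
    high-availability: zookeeper
    high-availability.zookeeper.quorum: zookeeper-host:2181
    high-availability.storageDir: hdfs:///flink/ha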



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
