Repository: spark
Updated Branches:
  refs/heads/master cd92f25be -> fc8222298


[SPARK-25809][K8S][TEST] New K8S integration testing backends

## What changes were proposed in this pull request?

Currently the K8S integration tests are hardcoded to use a `minikube` based
backend. `minikube` is VM based, so it can be resource hungry, and it does not
cope well with certain networking setups (for example, under the Cisco AnyConnect
software VPN `minikube` is unusable as it detects its own IP incorrectly).

This PR adds a new K8S integration testing backend that allows for using the
Kubernetes support in
[Docker for Desktop](https://blog.docker.com/2018/07/kubernetes-is-now-available-in-docker-desktop-stable-channel/).
It also generalises the framework to be able to run the integration tests
against an arbitrary Kubernetes cluster.
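
For example, with Kubernetes enabled in Docker for Desktop, the new backend can be selected as shown below (a typical invocation, run from the `resource-managers/kubernetes/integration-tests` directory):

    dev/dev-run-integration-tests.sh --deploy-mode docker-for-desktop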

To Do:

- [x] General Kubernetes cluster backend
- [x] Documentation on Kubernetes integration testing
- [x] Testing of general K8S backend
- [x] Check whether the change of timestamps from `Time` to `String` in the
Fabric 8 upgrade needs additional fix up

## How was this patch tested?

Ran integration tests with Docker for Desktop and all passed:

![screen shot 2018-10-23 at 14 19 56](https://user-images.githubusercontent.com/2104864/47363460-c5816a00-d6ce-11e8-9c15-56b34698e797.png)

Suggested Reviewers: ifilonenko srowen

Author: Rob Vesse <rve...@dotnetrdf.org>

Closes #22805 from rvesse/SPARK-25809.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/fc822229
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/fc822229
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/fc822229

Branch: refs/heads/master
Commit: fc8222298e26d9e4bb9ea1c0baa48cadba8ca673
Parents: cd92f25
Author: Rob Vesse <rve...@dotnetrdf.org>
Authored: Thu Nov 1 09:33:55 2018 -0700
Committer: mcheah <mch...@palantir.com>
Committed: Thu Nov 1 09:33:55 2018 -0700

----------------------------------------------------------------------
 .../k8s/SparkKubernetesClientFactory.scala      |   5 +
 .../k8s/submit/LoggingPodStatusWatcher.scala    |   3 -
 .../kubernetes/integration-tests/README.md      | 183 +++++++++++++++++--
 .../dev/dev-run-integration-tests.sh            |  10 +
 .../kubernetes/integration-tests/pom.xml        |  10 +
 .../scripts/setup-integration-test-env.sh       |  43 +++--
 .../k8s/integrationtest/KubernetesSuite.scala   |   3 +-
 .../KubernetesTestComponents.scala              |   5 +-
 .../k8s/integrationtest/ProcessUtils.scala      |   5 +-
 .../deploy/k8s/integrationtest/TestConfig.scala |   6 +-
 .../k8s/integrationtest/TestConstants.scala     |  15 +-
 .../backend/IntegrationTestBackend.scala        |  21 ++-
 .../backend/cloud/KubeConfigBackend.scala       |  70 +++++++
 .../docker/DockerForDesktopBackend.scala        |  25 +++
 14 files changed, 356 insertions(+), 48 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/fc822229/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/SparkKubernetesClientFactory.scala
----------------------------------------------------------------------
diff --git a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/SparkKubernetesClientFactory.scala b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/SparkKubernetesClientFactory.scala
index c47e78c..77bd66b 100644
--- a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/SparkKubernetesClientFactory.scala
+++ b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/SparkKubernetesClientFactory.scala
@@ -42,6 +42,9 @@ private[spark] object SparkKubernetesClientFactory {
       sparkConf: SparkConf,
       defaultServiceAccountToken: Option[File],
       defaultServiceAccountCaCert: Option[File]): KubernetesClient = {
+
+    // TODO [SPARK-25887] Support configurable context
+
     val oauthTokenFileConf = s"$kubernetesAuthConfPrefix.$OAUTH_TOKEN_FILE_CONF_SUFFIX"
     val oauthTokenConf = s"$kubernetesAuthConfPrefix.$OAUTH_TOKEN_CONF_SUFFIX"
     val oauthTokenFile = sparkConf.getOption(oauthTokenFileConf)
@@ -63,6 +66,8 @@ private[spark] object SparkKubernetesClientFactory {
       .getOption(s"$kubernetesAuthConfPrefix.$CLIENT_CERT_FILE_CONF_SUFFIX")
     val dispatcher = new Dispatcher(
       ThreadUtils.newDaemonCachedThreadPool("kubernetes-dispatcher"))
+
+    // TODO [SPARK-25887] Create builder in a way that respects configurable context
     val config = new ConfigBuilder()
       .withApiVersion("v1")
       .withMasterUrl(master)

http://git-wip-us.apache.org/repos/asf/spark/blob/fc822229/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/LoggingPodStatusWatcher.scala
----------------------------------------------------------------------
diff --git a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/LoggingPodStatusWatcher.scala b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/LoggingPodStatusWatcher.scala
index 79b55bc..a2430c0 100644
--- a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/LoggingPodStatusWatcher.scala
+++ b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/LoggingPodStatusWatcher.scala
@@ -18,13 +18,10 @@ package org.apache.spark.deploy.k8s.submit
 
 import java.util.concurrent.{CountDownLatch, TimeUnit}
 
-import scala.collection.JavaConverters._
-
 import io.fabric8.kubernetes.api.model.Pod
 import io.fabric8.kubernetes.client.{KubernetesClientException, Watcher}
 import io.fabric8.kubernetes.client.Watcher.Action
 
-import org.apache.spark.SparkException
 import org.apache.spark.deploy.k8s.KubernetesUtils._
 import org.apache.spark.internal.Logging
 import org.apache.spark.util.ThreadUtils

http://git-wip-us.apache.org/repos/asf/spark/blob/fc822229/resource-managers/kubernetes/integration-tests/README.md
----------------------------------------------------------------------
diff --git a/resource-managers/kubernetes/integration-tests/README.md b/resource-managers/kubernetes/integration-tests/README.md
index b3863e6..64f8e77 100644
--- a/resource-managers/kubernetes/integration-tests/README.md
+++ b/resource-managers/kubernetes/integration-tests/README.md
@@ -8,26 +8,59 @@ title: Spark on Kubernetes Integration Tests
 Note that the integration test framework is currently being heavily revised and
 is subject to change. Note that currently the integration tests only run with Java 8.
 
-The simplest way to run the integration tests is to install and run Minikube, then run the following:
+The simplest way to run the integration tests is to install and run Minikube, then run the following from this
+directory:
 
     dev/dev-run-integration-tests.sh
 
 The minimum tested version of Minikube is 0.23.0. The kube-dns addon must be enabled. Minikube should
-run with a minimum of 3 CPUs and 4G of memory:
+run with a minimum of 4 CPUs and 6G of memory:
 
-    minikube start --cpus 3 --memory 4096
+    minikube start --cpus 4 --memory 6144
 
 You can download Minikube [here](https://github.com/kubernetes/minikube/releases).
 
 # Integration test customization
 
-Configuration of the integration test runtime is done through passing different arguments to the test script. The main useful options are outlined below.
+Configuration of the integration test runtime is done through passing different arguments to the test script.
+The main useful options are outlined below.
+
+## Using a different backend
+
+The integration test backend, i.e. the K8S cluster used for testing, is controlled by the `--deploy-mode` option. By
+default this is set to `minikube`; the available backends and their prerequisites are as follows.
+
+### `minikube`
+
+Uses the local `minikube` cluster; this requires that `minikube` 0.23.0 or greater be installed and that it be
+allocated at least 4 CPUs and 6GB of memory (some users have reported success with as few as 3 CPUs and 4GB of memory).
+The tests will check whether `minikube` is started and abort early if it is not currently running.
+
+### `docker-for-desktop`
+
+Since July 2018 Docker for Desktop provides an optional Kubernetes cluster that can be enabled as described in this
+[blog post](https://blog.docker.com/2018/07/kubernetes-is-now-available-in-docker-desktop-stable-channel/). Assuming
+this is enabled, this backend will auto-configure itself from the `docker-for-desktop` context that Docker creates
+in your `~/.kube/config` file. If your config file is in a different location you should set the `KUBECONFIG`
+environment variable appropriately.
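+
+For example, with the Docker for Desktop Kubernetes cluster enabled, a typical invocation might be:
+
+    dev/dev-run-integration-tests.sh --deploy-mode docker-for-desktop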
+
+### `cloud` 
+
+The `cloud` backend configures the tests to use an arbitrary Kubernetes cluster running in the cloud or elsewhere.
+
+The `cloud` backend auto-configures the cluster to use from your K8S config file; this is assumed to be
+`~/.kube/config` unless the `KUBECONFIG` environment variable is set to override this location. By default this will
+use whatever your current context is in the config file; to use an alternative context from your config file you can
+specify the `--context <context>` flag with the desired context.
+
+You can optionally use a different K8S master URL than the one your K8S config file specifies; this should be supplied
+via the `--spark-master <master-url>` flag.
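+
+For example, to run against the cluster behind a specific context (the context name `my-cluster` below is purely
+illustrative):
+
+    dev/dev-run-integration-tests.sh --deploy-mode cloud --context my-cluster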
 
 ## Re-using Docker Images
 
 By default, the test framework will build new Docker images on every test execution. A unique image tag is generated,
-and it is written to file at `target/imageTag.txt`. To reuse the images built in a previous run, or to use a Docker image tag
-that you have built by other means already, pass the tag to the test script:
+and it is written to file at `target/imageTag.txt`. To reuse the images built in a previous run, or to use a Docker
+image tag that you have built by other means already, pass the tag to the test script:
 
     dev/dev-run-integration-tests.sh --image-tag <tag>
 
@@ -37,16 +70,140 @@ where if you still want to use images that were built before by the test framewo
 
 ## Spark Distribution Under Test
 
-The Spark code to test is handed to the integration test system via a tarball. Here is the option that is used to specify the tarball:
+The Spark code to test is handed to the integration test system via a tarball. Here is the option that is used to
+specify the tarball:
 
 * `--spark-tgz <path-to-tgz>` - set `<path-to-tgz>` to point to a tarball containing the Spark distribution to test.
 
-TODO: Don't require the packaging of the built Spark artifacts into this tarball, just read them out of the current tree.
+This tarball should be created by first running `dev/make-distribution.sh`, passing the `--tgz` flag and `-Pkubernetes`
+as one of the options, to ensure that Kubernetes support is included in the distribution. For more details on building
+a runnable distribution please see the
+[Building Spark](https://spark.apache.org/docs/latest/building-spark.html#building-a-runnable-distribution)
+documentation.
+
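+For example, from the root of the Spark checkout, a suitable tarball might be produced with:
+
+    ./dev/make-distribution.sh --tgz -Pkubernetes
+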
+**TODO:** Don't require the packaging of the built Spark artifacts into this tarball, just read them out of the
+current tree.
 
 ## Customizing the Namespace and Service Account
 
-* `--namespace <namespace>` - set `<namespace>` to the namespace in which the tests should be run.
-* `--service-account <service account name>` - set `<service account name>` to the name of the Kubernetes service account to
-use in the namespace specified by the `--namespace`. The service account is expected to have permissions to get, list, watch,
-and create pods. For clusters with RBAC turned on, it's important that the right permissions are granted to the service account
-in the namespace through an appropriate role and role binding. A reference RBAC configuration is provided in `dev/spark-rbac.yaml`.
+If no namespace is specified then a temporary namespace will be created and deleted during the test run. Similarly, if
+no service account is specified then the `default` service account for the namespace will be used.
+
+Using the `--namespace <namespace>` flag sets `<namespace>` to the namespace in which the tests should be run. If this
+is supplied then the tests assume this namespace exists in the K8S cluster and will not attempt to create it.
+Additionally this namespace must have an appropriately authorized service account which can be customised via the
+`--service-account` flag.
+
+The `--service-account <service account name>` flag sets `<service account name>` to the name of the Kubernetes
+service account to use in the namespace specified by the `--namespace` flag. The service account is expected to have
+permissions to get, list, watch, and create pods. For clusters with RBAC turned on, it's important that the right
+permissions are granted to the service account in the namespace through an appropriate role and role binding. A
+reference RBAC configuration is provided in `dev/spark-rbac.yaml`.
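+
+For example, to run against an existing namespace with a pre-created service account (the names `spark-tests` and
+`spark-sa` below are purely illustrative):
+
+    dev/dev-run-integration-tests.sh --namespace spark-tests --service-account spark-sa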
+
+# Running the Test Directly
+
+If you prefer to run just the integration tests directly, then you can customise the behaviour via passing system
+properties to Maven. For example:
+
+    mvn integration-test -am -pl :spark-kubernetes-integration-tests_2.11 \
+                            -Pkubernetes -Pkubernetes-integration-tests \
+                            -Phadoop-2.7 -Dhadoop.version=2.7.3 \
+                            -Dspark.kubernetes.test.sparkTgz=spark-3.0.0-SNAPSHOT-bin-example.tgz \
+                            -Dspark.kubernetes.test.imageTag=sometag \
+                            -Dspark.kubernetes.test.imageRepo=docker.io/somerepo \
+                            -Dspark.kubernetes.test.namespace=spark-int-tests \
+                            -Dspark.kubernetes.test.deployMode=docker-for-desktop \
+                            -Dtest.include.tags=k8s
+
+## Available Maven Properties
+
+The following are the available Maven properties that can be passed. For the most part these correspond to flags
+passed to the wrapper scripts, and using the wrapper scripts will simply set these appropriately behind the scenes.
+
+<table>
+  <tr>
+    <th>Property</th>
+    <th>Description</th>
+    <th>Default</th>
+  </tr>
+  <tr>
+    <td><code>spark.kubernetes.test.sparkTgz</code></td>
+    <td>
+      A runnable Spark distribution to test.
+    </td>
+    <td></td>
+  </tr>
+  <tr>
+    <td><code>spark.kubernetes.test.unpackSparkDir</code></td>
+    <td>
+      The directory where the runnable Spark distribution will be unpacked.
+    </td>
+    <td><code>${project.build.directory}/spark-dist-unpacked</code></td>
+  </tr>
+  <tr>
+    <td><code>spark.kubernetes.test.deployMode</code></td>
+    <td>
+      The integration test backend to use. Acceptable values are <code>minikube</code>,
+      <code>docker-for-desktop</code> and <code>cloud</code>.
+    </td>
+    <td><code>minikube</code></td>
+  </tr>
+  <tr>
+    <td><code>spark.kubernetes.test.kubeConfigContext</code></td>
+    <td>
+      When using the <code>cloud</code> backend, specifies the context from the user's K8S config file that should be
+      used as the target cluster for integration testing. If not set and using the <code>cloud</code> backend then
+      your current context will be used.
+    </td>
+    <td></td>
+  </tr>
+  <tr>
+    <td><code>spark.kubernetes.test.master</code></td>
+    <td>
+      When using the <code>cloud</code> backend, optionally specifies the K8S master URL to communicate with,
+      overriding the master URL from the K8S config file.
+    </td>
+    <td></td>
+  </tr>
+  <tr>
+    <td><code>spark.kubernetes.test.imageTag</code></td>
+    <td>
+      A specific image tag to use; when set, assumes images with those tags are already built and available in the
+      specified image repository. When set to <code>N/A</code> (the default) fresh images will be built.
+    </td>
+    <td><code>N/A</code></td>
+  </tr>
+  <tr>
+    <td><code>spark.kubernetes.test.imageTagFile</code></td>
+    <td>
+      A file containing the image tag to use. If no specific image tag is set then fresh images will be built with a
+      generated tag and that tag written to this file.
+    </td>
+    <td><code>${project.build.directory}/imageTag.txt</code></td>
+  </tr>
+  <tr>
+    <td><code>spark.kubernetes.test.imageRepo</code></td>
+    <td>
+      The Docker image repository that contains the images to be used if a specific image tag is set, or to which the
+      images will be pushed if fresh images are being built.
+    </td>
+    <td><code>docker.io/kubespark</code></td>
+  </tr>
+  <tr>
+    <td><code>spark.kubernetes.test.namespace</code></td>
+    <td>
+      A specific Kubernetes namespace to run the tests in. If specified then the tests assume that this namespace
+      already exists. When not specified a temporary namespace for the tests will be created and deleted as part of
+      the test run.
+    </td>
+    <td></td>
+  </tr>
+  <tr>
+    <td><code>spark.kubernetes.test.serviceAccountName</code></td>
+    <td>
+      A specific Kubernetes service account to use for running the tests. If not specified then the namespace's
+      default service account will be used and that must have sufficient permissions or the tests will fail.
+    </td>
+    <td></td>
+  </tr>
+</table>

http://git-wip-us.apache.org/repos/asf/spark/blob/fc822229/resource-managers/kubernetes/integration-tests/dev/dev-run-integration-tests.sh
----------------------------------------------------------------------
diff --git a/resource-managers/kubernetes/integration-tests/dev/dev-run-integration-tests.sh b/resource-managers/kubernetes/integration-tests/dev/dev-run-integration-tests.sh
index c3c843e..3c7cc93 100755
--- a/resource-managers/kubernetes/integration-tests/dev/dev-run-integration-tests.sh
+++ b/resource-managers/kubernetes/integration-tests/dev/dev-run-integration-tests.sh
@@ -26,6 +26,7 @@ IMAGE_TAG="N/A"
 SPARK_MASTER=
 NAMESPACE=
 SERVICE_ACCOUNT=
+CONTEXT=
 INCLUDE_TAGS="k8s"
 EXCLUDE_TAGS=
 SCALA_VERSION="$($TEST_ROOT_DIR/build/mvn org.apache.maven.plugins:maven-help-plugin:2.1.1:evaluate -Dexpression=scala.binary.version | grep -v '\[' )"
@@ -61,6 +62,10 @@ while (( "$#" )); do
       SERVICE_ACCOUNT="$2"
       shift
       ;;
+    --context)
+      CONTEXT="$2"
+      shift
+      ;;
     --include-tags)
       INCLUDE_TAGS="k8s,$2"
       shift
@@ -94,6 +99,11 @@ then
   properties=( ${properties[@]} -Dspark.kubernetes.test.serviceAccountName=$SERVICE_ACCOUNT )
 fi
 
+if [ -n "$CONTEXT" ];
+then
+  properties=( ${properties[@]} -Dspark.kubernetes.test.kubeConfigContext=$CONTEXT )
+fi
+
 if [ -n $SPARK_MASTER ];
 then
   properties=( ${properties[@]} -Dspark.kubernetes.test.master=$SPARK_MASTER )

http://git-wip-us.apache.org/repos/asf/spark/blob/fc822229/resource-managers/kubernetes/integration-tests/pom.xml
----------------------------------------------------------------------
diff --git a/resource-managers/kubernetes/integration-tests/pom.xml b/resource-managers/kubernetes/integration-tests/pom.xml
index a07fe1f..07288c9 100644
--- a/resource-managers/kubernetes/integration-tests/pom.xml
+++ b/resource-managers/kubernetes/integration-tests/pom.xml
@@ -33,11 +33,20 @@
     <scala-maven-plugin.version>3.2.2</scala-maven-plugin.version>
     <scalatest-maven-plugin.version>1.0</scalatest-maven-plugin.version>
     <sbt.project.name>kubernetes-integration-tests</sbt.project.name>
+
+    <!-- Integration Test Configuration Properties -->
+    <!-- Please see README.md in this directory for explanation of these -->
+    <spark.kubernetes.test.sparkTgz></spark.kubernetes.test.sparkTgz>
     <spark.kubernetes.test.unpackSparkDir>${project.build.directory}/spark-dist-unpacked</spark.kubernetes.test.unpackSparkDir>
     <spark.kubernetes.test.imageTag>N/A</spark.kubernetes.test.imageTag>
     <spark.kubernetes.test.imageTagFile>${project.build.directory}/imageTag.txt</spark.kubernetes.test.imageTagFile>
     <spark.kubernetes.test.deployMode>minikube</spark.kubernetes.test.deployMode>
     <spark.kubernetes.test.imageRepo>docker.io/kubespark</spark.kubernetes.test.imageRepo>
+    <spark.kubernetes.test.kubeConfigContext></spark.kubernetes.test.kubeConfigContext>
+    <spark.kubernetes.test.master></spark.kubernetes.test.master>
+    <spark.kubernetes.test.namespace></spark.kubernetes.test.namespace>
+    <spark.kubernetes.test.serviceAccountName></spark.kubernetes.test.serviceAccountName>
+
     <test.exclude.tags></test.exclude.tags>
     <test.include.tags></test.include.tags>
   </properties>
@@ -135,6 +144,7 @@
             <spark.kubernetes.test.unpackSparkDir>${spark.kubernetes.test.unpackSparkDir}</spark.kubernetes.test.unpackSparkDir>
             <spark.kubernetes.test.imageRepo>${spark.kubernetes.test.imageRepo}</spark.kubernetes.test.imageRepo>
             <spark.kubernetes.test.deployMode>${spark.kubernetes.test.deployMode}</spark.kubernetes.test.deployMode>
+            <spark.kubernetes.test.kubeConfigContext>${spark.kubernetes.test.kubeConfigContext}</spark.kubernetes.test.kubeConfigContext>
             <spark.kubernetes.test.master>${spark.kubernetes.test.master}</spark.kubernetes.test.master>
             <spark.kubernetes.test.namespace>${spark.kubernetes.test.namespace}</spark.kubernetes.test.namespace>
             <spark.kubernetes.test.serviceAccountName>${spark.kubernetes.test.serviceAccountName}</spark.kubernetes.test.serviceAccountName>

http://git-wip-us.apache.org/repos/asf/spark/blob/fc822229/resource-managers/kubernetes/integration-tests/scripts/setup-integration-test-env.sh
----------------------------------------------------------------------
diff --git a/resource-managers/kubernetes/integration-tests/scripts/setup-integration-test-env.sh b/resource-managers/kubernetes/integration-tests/scripts/setup-integration-test-env.sh
index ccfb8e7..a4a9f5b 100755
--- a/resource-managers/kubernetes/integration-tests/scripts/setup-integration-test-env.sh
+++ b/resource-managers/kubernetes/integration-tests/scripts/setup-integration-test-env.sh
@@ -71,19 +71,36 @@ if [[ $IMAGE_TAG == "N/A" ]];
 then
   IMAGE_TAG=$(uuidgen);
   cd $UNPACKED_SPARK_TGZ
-  if [[ $DEPLOY_MODE == cloud ]] ;
-  then
-    $UNPACKED_SPARK_TGZ/bin/docker-image-tool.sh -r $IMAGE_REPO -t $IMAGE_TAG build
-    if  [[ $IMAGE_REPO == gcr.io* ]] ;
-    then
-      gcloud docker -- push $IMAGE_REPO/spark:$IMAGE_TAG
-    else
-      $UNPACKED_SPARK_TGZ/bin/docker-image-tool.sh -r $IMAGE_REPO -t $IMAGE_TAG push
-    fi
-  else
-    # -m option for minikube.
-    $UNPACKED_SPARK_TGZ/bin/docker-image-tool.sh -m -r $IMAGE_REPO -t $IMAGE_TAG build
-  fi
+
+  case $DEPLOY_MODE in
+    cloud)
+      # Build images
+      $UNPACKED_SPARK_TGZ/bin/docker-image-tool.sh -r $IMAGE_REPO -t $IMAGE_TAG build
+
+      # Push images appropriately
+      if [[ $IMAGE_REPO == gcr.io* ]] ;
+      then
+        gcloud docker -- push $IMAGE_REPO/spark:$IMAGE_TAG
+      else
+        $UNPACKED_SPARK_TGZ/bin/docker-image-tool.sh -r $IMAGE_REPO -t $IMAGE_TAG push
+      fi
+      ;;
+
+    docker-for-desktop)
+       # Only need to build as this will place it in our local Docker repo which is all
+       # we need for Docker for Desktop to work so no need to also push
+       $UNPACKED_SPARK_TGZ/bin/docker-image-tool.sh -r $IMAGE_REPO -t $IMAGE_TAG build
+       ;;
+
+    minikube)
+       # Only need to build and if we do this with the -m option for minikube we will
+       # build the images directly using the minikube Docker daemon so no need to push
+       $UNPACKED_SPARK_TGZ/bin/docker-image-tool.sh -m -r $IMAGE_REPO -t $IMAGE_TAG build
+       ;;
+    *)
+       echo "Unrecognized deploy mode $DEPLOY_MODE" && exit 1
+       ;;
+  esac
   cd -
 fi
 

http://git-wip-us.apache.org/repos/asf/spark/blob/fc822229/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesSuite.scala
----------------------------------------------------------------------
diff --git a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesSuite.scala b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesSuite.scala
index e2e5880..6aa1d57 100644
--- a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesSuite.scala
+++ b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesSuite.scala
@@ -33,6 +33,7 @@ import scala.collection.JavaConverters._
 
 import org.apache.spark.SparkFunSuite
 import org.apache.spark.deploy.k8s.integrationtest.TestConfig._
+import org.apache.spark.deploy.k8s.integrationtest.TestConstants._
 import org.apache.spark.deploy.k8s.integrationtest.backend.{IntegrationTestBackend, IntegrationTestBackendFactory}
 import org.apache.spark.internal.Logging
 
@@ -77,7 +78,7 @@ private[spark] class KubernetesSuite extends SparkFunSuite
       System.clearProperty(key)
     }
 
-    val sparkDirProp = System.getProperty("spark.kubernetes.test.unpackSparkDir")
+    val sparkDirProp = System.getProperty(CONFIG_KEY_UNPACK_DIR)
     require(sparkDirProp != null, "Spark home directory must be provided in system properties.")
     sparkHomeDir = Paths.get(sparkDirProp)
     require(sparkHomeDir.toFile.isDirectory,

http://git-wip-us.apache.org/repos/asf/spark/blob/fc822229/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesTestComponents.scala
----------------------------------------------------------------------
diff --git a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesTestComponents.scala b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesTestComponents.scala
index 5615d61..c0b435e 100644
--- a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesTestComponents.scala
+++ b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesTestComponents.scala
@@ -25,15 +25,16 @@ import scala.collection.mutable
 import io.fabric8.kubernetes.client.DefaultKubernetesClient
 import org.scalatest.concurrent.Eventually
 
+import org.apache.spark.deploy.k8s.integrationtest.TestConstants._
 import org.apache.spark.internal.Logging
 
 private[spark] class KubernetesTestComponents(defaultClient: DefaultKubernetesClient) {
 
-  val namespaceOption = Option(System.getProperty("spark.kubernetes.test.namespace"))
+  val namespaceOption = Option(System.getProperty(CONFIG_KEY_KUBE_NAMESPACE))
   val hasUserSpecifiedNamespace = namespaceOption.isDefined
   val namespace = namespaceOption.getOrElse(UUID.randomUUID().toString.replaceAll("-", ""))
   val serviceAccountName =
-    Option(System.getProperty("spark.kubernetes.test.serviceAccountName"))
+    Option(System.getProperty(CONFIG_KEY_KUBE_SVC_ACCOUNT))
       .getOrElse("default")
   val kubernetesClient = defaultClient.inNamespace(namespace)
   val clientConfig = kubernetesClient.getConfiguration

http://git-wip-us.apache.org/repos/asf/spark/blob/fc822229/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/ProcessUtils.scala
----------------------------------------------------------------------
diff --git a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/ProcessUtils.scala b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/ProcessUtils.scala
index d8f3a6c..004a942 100644
--- a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/ProcessUtils.scala
+++ b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/ProcessUtils.scala
@@ -28,7 +28,7 @@ object ProcessUtils extends Logging {
    * executeProcess is used to run a command and return the output if it
    * completes within timeout seconds.
    */
-  def executeProcess(fullCommand: Array[String], timeout: Long): Seq[String] = {
+  def executeProcess(fullCommand: Array[String], timeout: Long, dumpErrors: Boolean = false): Seq[String] = {
     val pb = new ProcessBuilder().command(fullCommand: _*)
     pb.redirectErrorStream(true)
     val proc = pb.start()
@@ -40,7 +40,8 @@ object ProcessUtils extends Logging {
       })
     assert(proc.waitFor(timeout, TimeUnit.SECONDS),
       s"Timed out while executing ${fullCommand.mkString(" ")}")
-    assert(proc.exitValue == 0, s"Failed to execute ${fullCommand.mkString(" ")}")
+    assert(proc.exitValue == 0,
+      s"Failed to execute ${fullCommand.mkString(" ")}${if (dumpErrors) "\n" + outputLines.mkString("\n") else ""}")
     outputLines
   }
 }

http://git-wip-us.apache.org/repos/asf/spark/blob/fc822229/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/TestConfig.scala
----------------------------------------------------------------------
diff --git a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/TestConfig.scala b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/TestConfig.scala
index 5a49e07..363ec0a 100644
--- a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/TestConfig.scala
+++ b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/TestConfig.scala
@@ -21,9 +21,11 @@ import java.io.File
 import com.google.common.base.Charsets
 import com.google.common.io.Files
 
+import org.apache.spark.deploy.k8s.integrationtest.TestConstants._
+
 object TestConfig {
   def getTestImageTag: String = {
-    val imageTagFileProp = System.getProperty("spark.kubernetes.test.imageTagFile")
+    val imageTagFileProp = System.getProperty(CONFIG_KEY_IMAGE_TAG_FILE)
     require(imageTagFileProp != null, "Image tag file must be provided in system properties.")
     val imageTagFile = new File(imageTagFileProp)
     require(imageTagFile.isFile, s"No file found for image tag at ${imageTagFile.getAbsolutePath}.")
@@ -31,7 +33,7 @@ object TestConfig {
   }
 
   def getTestImageRepo: String = {
-    val imageRepo = System.getProperty("spark.kubernetes.test.imageRepo")
+    val imageRepo = System.getProperty(CONFIG_KEY_IMAGE_REPO)
     require(imageRepo != null, "Image repo must be provided in system properties.")
     imageRepo
   }

http://git-wip-us.apache.org/repos/asf/spark/blob/fc822229/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/TestConstants.scala
----------------------------------------------------------------------
diff --git a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/TestConstants.scala b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/TestConstants.scala
index 8595d0e..eeae70c 100644
--- a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/TestConstants.scala
+++ b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/TestConstants.scala
@@ -17,6 +17,17 @@
 package org.apache.spark.deploy.k8s.integrationtest
 
 object TestConstants {
-  val MINIKUBE_TEST_BACKEND = "minikube"
-  val GCE_TEST_BACKEND = "gce"
+  val BACKEND_MINIKUBE = "minikube"
+  val BACKEND_DOCKER_FOR_DESKTOP = "docker-for-desktop"
+  val BACKEND_CLOUD = "cloud"
+
+  val CONFIG_KEY_DEPLOY_MODE = "spark.kubernetes.test.deployMode"
+  val CONFIG_KEY_KUBE_CONFIG_CONTEXT = "spark.kubernetes.test.kubeConfigContext"
+  val CONFIG_KEY_KUBE_MASTER_URL = "spark.kubernetes.test.master"
+  val CONFIG_KEY_KUBE_NAMESPACE = "spark.kubernetes.test.namespace"
+  val CONFIG_KEY_KUBE_SVC_ACCOUNT = "spark.kubernetes.test.serviceAccountName"
+  val CONFIG_KEY_IMAGE_TAG = "spark.kubernetes.test.imageTag"
+  val CONFIG_KEY_IMAGE_TAG_FILE = "spark.kubernetes.test.imageTagFile"
+  val CONFIG_KEY_IMAGE_REPO = "spark.kubernetes.test.imageRepo"
+  val CONFIG_KEY_UNPACK_DIR = "spark.kubernetes.test.unpackSparkDir"
 }

http://git-wip-us.apache.org/repos/asf/spark/blob/fc822229/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/IntegrationTestBackend.scala
----------------------------------------------------------------------
diff --git a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/IntegrationTestBackend.scala b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/IntegrationTestBackend.scala
index 284712c..7bf324c 100644
--- a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/IntegrationTestBackend.scala
+++ b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/IntegrationTestBackend.scala
@@ -18,7 +18,9 @@
 package org.apache.spark.deploy.k8s.integrationtest.backend
 
 import io.fabric8.kubernetes.client.DefaultKubernetesClient
-
+import org.apache.spark.deploy.k8s.integrationtest.TestConstants._
+import org.apache.spark.deploy.k8s.integrationtest.backend.cloud.KubeConfigBackend
+import org.apache.spark.deploy.k8s.integrationtest.backend.docker.DockerForDesktopBackend
 import org.apache.spark.deploy.k8s.integrationtest.backend.minikube.MinikubeTestBackend
 
 private[spark] trait IntegrationTestBackend {
@@ -28,16 +30,15 @@ private[spark] trait IntegrationTestBackend {
 }
 
 private[spark] object IntegrationTestBackendFactory {
-  val deployModeConfigKey = "spark.kubernetes.test.deployMode"
-
   def getTestBackend: IntegrationTestBackend = {
-    val deployMode = Option(System.getProperty(deployModeConfigKey))
-      .getOrElse("minikube")
-    if (deployMode == "minikube") {
-      MinikubeTestBackend
-    } else {
-      throw new IllegalArgumentException(
-        "Invalid " + deployModeConfigKey + ": " + deployMode)
+    val deployMode = Option(System.getProperty(CONFIG_KEY_DEPLOY_MODE))
+      .getOrElse(BACKEND_MINIKUBE)
+    deployMode match {
+      case BACKEND_MINIKUBE => MinikubeTestBackend
+      case BACKEND_CLOUD => new KubeConfigBackend(System.getProperty(CONFIG_KEY_KUBE_CONFIG_CONTEXT))
+      case BACKEND_DOCKER_FOR_DESKTOP => DockerForDesktopBackend
+      case _ => throw new IllegalArgumentException("Invalid " +
+        CONFIG_KEY_DEPLOY_MODE + ": " + deployMode)
     }
   }
 }

http://git-wip-us.apache.org/repos/asf/spark/blob/fc822229/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/cloud/KubeConfigBackend.scala
----------------------------------------------------------------------
diff --git a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/cloud/KubeConfigBackend.scala b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/cloud/KubeConfigBackend.scala
new file mode 100644
index 0000000..333526b
--- /dev/null
+++ b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/cloud/KubeConfigBackend.scala
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.deploy.k8s.integrationtest.backend.cloud
+
+import java.nio.file.Paths
+
+import io.fabric8.kubernetes.client.utils.Utils
+import io.fabric8.kubernetes.client.{Config, DefaultKubernetesClient}
+import org.apache.commons.lang3.StringUtils
+import org.apache.spark.deploy.k8s.integrationtest.TestConstants
+import org.apache.spark.deploy.k8s.integrationtest.backend.IntegrationTestBackend
+import org.apache.spark.internal.Logging
+import org.apache.spark.util.Utils.checkAndGetK8sMasterUrl
+
+private[spark] class KubeConfigBackend(var context: String)
+  extends IntegrationTestBackend with Logging {
+  logInfo(s"K8S Integration tests will run against " +
+    s"${if (context != null) s"context ${context}" else "default context"}" +
+    s" from user's K8S config file")
+
+  private var defaultClient: DefaultKubernetesClient = _
+
+  override def initialize(): Unit = {
+    // Auto-configure K8S client from K8S config file
+    if (Utils.getSystemPropertyOrEnvVar(Config.KUBERNETES_KUBECONFIG_FILE, null: String) == null) {
+      // Fabric 8 client will automatically assume a default location in this case
+      logWarning("No explicit KUBECONFIG specified, will assume .kube/config under your home directory")
+    }
+    val config = Config.autoConfigure(context)
+
+    // If an explicit master URL was specified then override that detected from the
+    // K8S config if it is different
+    var masterUrl = Option(System.getProperty(TestConstants.CONFIG_KEY_KUBE_MASTER_URL))
+      .getOrElse(null)
+    if (StringUtils.isNotBlank(masterUrl)) {
+      // Clean up master URL which would have been specified in Spark format into a normal
+      // K8S master URL
+      masterUrl = checkAndGetK8sMasterUrl(masterUrl).replaceFirst("k8s://", "")
+      if (!StringUtils.equals(config.getMasterUrl, masterUrl)) {
+        logInfo(s"Overriding K8S master URL ${config.getMasterUrl} from K8S config file " +
+          s"with user specified master URL ${masterUrl}")
+        config.setMasterUrl(masterUrl)
+      }
+    }
+
+    defaultClient = new DefaultKubernetesClient(config)
+  }
+
+  override def cleanUp(): Unit = {
+    super.cleanUp()
+  }
+
+  override def getKubernetesClient: DefaultKubernetesClient = {
+    defaultClient
+  }
+}

http://git-wip-us.apache.org/repos/asf/spark/blob/fc822229/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/docker/DockerForDesktopBackend.scala
----------------------------------------------------------------------
diff --git a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/docker/DockerForDesktopBackend.scala b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/docker/DockerForDesktopBackend.scala
new file mode 100644
index 0000000..81a11ae
--- /dev/null
+++ b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/docker/DockerForDesktopBackend.scala
@@ -0,0 +1,25 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.deploy.k8s.integrationtest.backend.docker
+
+import org.apache.spark.deploy.k8s.integrationtest.TestConstants
+import org.apache.spark.deploy.k8s.integrationtest.backend.cloud.KubeConfigBackend
+
+private[spark] object DockerForDesktopBackend
+  extends KubeConfigBackend(TestConstants.BACKEND_DOCKER_FOR_DESKTOP) {
+
+}

