spark git commit: [SPARK-24428][K8S] Fix unused code

2018-07-02 Thread foxish
Repository: spark
Updated Branches:
  refs/heads/master 42815548c -> 85fe1297e


[SPARK-24428][K8S] Fix unused code

## What changes were proposed in this pull request?

Remove misleading code left over from a previous implementation.

## How was this patch tested?
Manually.

Author: Stavros Kontopoulos 

Closes #21462 from skonto/fix-k8s-docs.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/85fe1297
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/85fe1297
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/85fe1297

Branch: refs/heads/master
Commit: 85fe1297e35bcff9cf86bd53fee615e140ee5bfb
Parents: 4281554
Author: Stavros Kontopoulos 
Authored: Mon Jul 2 13:08:16 2018 -0700
Committer: Anirudh Ramanathan 
Committed: Mon Jul 2 13:08:16 2018 -0700

--
 .../scala/org/apache/spark/deploy/k8s/Constants.scala   |  6 --
 .../cluster/k8s/KubernetesClusterManager.scala  |  2 --
 .../docker/src/main/dockerfiles/spark/entrypoint.sh | 12 +---
 3 files changed, 5 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/85fe1297/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Constants.scala
--
diff --git 
a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Constants.scala
 
b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Constants.scala
index 69bd03d..5ecdd3a 100644
--- 
a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Constants.scala
+++ 
b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Constants.scala
@@ -25,9 +25,6 @@ private[spark] object Constants {
   val SPARK_POD_DRIVER_ROLE = "driver"
   val SPARK_POD_EXECUTOR_ROLE = "executor"
 
-  // Annotations
-  val SPARK_APP_NAME_ANNOTATION = "spark-app-name"
-
   // Credentials secrets
   val DRIVER_CREDENTIALS_SECRETS_BASE_DIR =
 "/mnt/secrets/spark-kubernetes-credentials"
@@ -50,17 +47,14 @@ private[spark] object Constants {
   val DEFAULT_BLOCKMANAGER_PORT = 7079
   val DRIVER_PORT_NAME = "driver-rpc-port"
   val BLOCK_MANAGER_PORT_NAME = "blockmanager"
-  val EXECUTOR_PORT_NAME = "executor"
 
   // Environment Variables
-  val ENV_EXECUTOR_PORT = "SPARK_EXECUTOR_PORT"
   val ENV_DRIVER_URL = "SPARK_DRIVER_URL"
   val ENV_EXECUTOR_CORES = "SPARK_EXECUTOR_CORES"
   val ENV_EXECUTOR_MEMORY = "SPARK_EXECUTOR_MEMORY"
   val ENV_APPLICATION_ID = "SPARK_APPLICATION_ID"
   val ENV_EXECUTOR_ID = "SPARK_EXECUTOR_ID"
   val ENV_EXECUTOR_POD_IP = "SPARK_EXECUTOR_POD_IP"
-  val ENV_MOUNTED_CLASSPATH = "SPARK_MOUNTED_CLASSPATH"
   val ENV_JAVA_OPT_PREFIX = "SPARK_JAVA_OPT_"
   val ENV_CLASSPATH = "SPARK_CLASSPATH"
   val ENV_DRIVER_BIND_ADDRESS = "SPARK_DRIVER_BIND_ADDRESS"

http://git-wip-us.apache.org/repos/asf/spark/blob/85fe1297/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterManager.scala
--
diff --git 
a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterManager.scala
 
b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterManager.scala
index c6e931a..de2a52b 100644
--- 
a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterManager.scala
+++ 
b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterManager.scala
@@ -48,8 +48,6 @@ private[spark] class KubernetesClusterManager extends 
ExternalClusterManager wit
   sc: SparkContext,
   masterURL: String,
   scheduler: TaskScheduler): SchedulerBackend = {
-val executorSecretNamesToMountPaths = 
KubernetesUtils.parsePrefixedKeyValuePairs(
-  sc.conf, KUBERNETES_EXECUTOR_SECRETS_PREFIX)
 val kubernetesClient = SparkKubernetesClientFactory.createKubernetesClient(
   KUBERNETES_MASTER_INTERNAL_URL,
   Some(sc.conf.get(KUBERNETES_NAMESPACE)),

http://git-wip-us.apache.org/repos/asf/spark/blob/85fe1297/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh
--
diff --git 
a/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh 
b/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh
index 2f4e115..8bdb0f7 100755
--- 
a/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh
+++ 
b/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh
@@ -51,12 +51,10 @@ esac
 
 

spark git commit: [SPARK-24547][K8S] Allow for building spark on k8s docker images without cache and don't forget to push spark-py container.

2018-06-20 Thread foxish
Repository: spark
Updated Branches:
  refs/heads/master 3f4bda728 -> 15747cfd3


[SPARK-24547][K8S] Allow for building spark on k8s docker images without cache 
and don't forget to push spark-py container.

## What changes were proposed in this pull request?

https://issues.apache.org/jira/browse/SPARK-24547

TL;DR from JIRA issue:

- The first time I generated images for 2.4.0, Docker was using its cache, so old jars were still in the Docker image when running jobs. This produces errors like the following in the executors:

`java.io.InvalidClassException: org.apache.spark.storage.BlockManagerId; local class incompatible: stream classdesc serialVersionUID = 6155820641931972169, local class serialVersionUID = -3720498261147521051`

- The second problem was that the spark container is pushed, but the spark-py container was not. This was simply overlooked in the initial PR.

- A third problem, which I ran into because I had an older Docker version, is addressed separately in https://github.com/apache/spark/pull/21551, so I have not included a fix for it in this ticket.

## How was this patch tested?

I've tested it on my own Spark on k8s deployment.

Author: Ray Burgemeestre 

Closes #21555 from rayburgemeestre/SPARK-24547.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/15747cfd
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/15747cfd
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/15747cfd

Branch: refs/heads/master
Commit: 15747cfd3246385ffb23e19e28d2e4effa710bf6
Parents: 3f4bda7
Author: Ray Burgemeestre 
Authored: Wed Jun 20 17:09:37 2018 -0700
Committer: Anirudh Ramanathan 
Committed: Wed Jun 20 17:09:37 2018 -0700

--
 bin/docker-image-tool.sh | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/15747cfd/bin/docker-image-tool.sh
--
diff --git a/bin/docker-image-tool.sh b/bin/docker-image-tool.sh
index a871ab5..a3f1bcf 100755
--- a/bin/docker-image-tool.sh
+++ b/bin/docker-image-tool.sh
@@ -70,17 +70,18 @@ function build {
   local BASEDOCKERFILE=${BASEDOCKERFILE:-"$IMG_PATH/spark/Dockerfile"}
   local 
PYDOCKERFILE=${PYDOCKERFILE:-"$IMG_PATH/spark/bindings/python/Dockerfile"}
 
-  docker build "${BUILD_ARGS[@]}" \
+  docker build $NOCACHEARG "${BUILD_ARGS[@]}" \
 -t $(image_ref spark) \
 -f "$BASEDOCKERFILE" .
 
-docker build "${BINDING_BUILD_ARGS[@]}" \
+  docker build $NOCACHEARG "${BINDING_BUILD_ARGS[@]}" \
 -t $(image_ref spark-py) \
 -f "$PYDOCKERFILE" .
 }
 
 function push {
   docker push "$(image_ref spark)"
+  docker push "$(image_ref spark-py)"
 }
 
 function usage {
@@ -99,6 +100,7 @@ Options:
   -r repo Repository address.
   -t tag  Tag to apply to the built image, or to identify the image to be 
pushed.
   -m  Use minikube's Docker daemon.
+  -n  Build docker image with --no-cache
 
 Using minikube when building images will do so directly into minikube's Docker 
daemon.
 There is no need to push the images into minikube in that case, they'll be 
automatically
@@ -127,7 +129,8 @@ REPO=
 TAG=
 BASEDOCKERFILE=
 PYDOCKERFILE=
-while getopts f:mr:t: option
+NOCACHEARG=
+while getopts f:mr:t:n option
 do
  case "${option}"
  in
@@ -135,6 +138,7 @@ do
  p) PYDOCKERFILE=${OPTARG};;
  r) REPO=${OPTARG};;
  t) TAG=${OPTARG};;
+ n) NOCACHEARG="--no-cache";;
  m)
if ! which minikube 1>/dev/null; then
  error "Cannot find minikube."





spark git commit: [SPARK-24232][K8S] Add support for secret env vars

2018-05-31 Thread foxish
Repository: spark
Updated Branches:
  refs/heads/master cc976f6cb -> 21e1fc7d4


[SPARK-24232][K8S] Add support for secret env vars

## What changes were proposed in this pull request?

* Allows referring to a secret as an env var.
* Introduces new config properties of the form `spark.kubernetes.{driver,executor}.secretKeyRef.ENV_NAME=name:key` (ENV_NAME is case sensitive); a minimal sketch of how such an entry maps to a container env var follows below.
* Updates docs.
* Adds required unit tests.
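
For illustration, here is a minimal sketch (not the exact implementation) of how such a `secretKeyRef` entry can be turned into a container env var using fabric8's builder API; the `secretEnvNamesToKeyRefs` map is a hypothetical input assumed to have been parsed from the new properties already:

```scala
import io.fabric8.kubernetes.api.model.{Container, ContainerBuilder, EnvVarBuilder}

// Sketch only: add one env var per configured secretKeyRef entry.
// secretEnvNamesToKeyRefs maps ENV_NAME -> "secretName:key" (hypothetical input).
def addSecretEnvVars(
    container: Container,
    secretEnvNamesToKeyRefs: Map[String, String]): Container = {
  secretEnvNamesToKeyRefs.foldLeft(container) { case (c, (envName, keyRef)) =>
    val Array(secretName, secretKey) = keyRef.split(":", 2)
    new ContainerBuilder(c)
      .addToEnv(new EnvVarBuilder()
        .withName(envName)
        .withNewValueFrom()
          .withNewSecretKeyRef()
            .withName(secretName) // name of the Kubernetes Secret
            .withKey(secretKey)   // key inside that Secret
          .endSecretKeyRef()
        .endValueFrom()
        .build())
      .build()
  }
}
```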

## How was this patch tested?
Manually tested and confirmed that the secrets exist in the driver's and executor's container environments, and the job finished successfully.
First created a secret with the following yaml:
```
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  username: c3RhdnJvcwo=
  password: Mzk1MjgkdmRnN0pi

---

$ echo -n 'stavros' | base64
c3RhdnJvcw==
$ echo -n '39528$vdg7Jb' | base64
Mzk1MjgkdmRnN0pi
```
Run a job as follows:
```
./bin/spark-submit \
  --master k8s://http://localhost:9000 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=1 \
  --conf spark.kubernetes.container.image=skonto/spark:k8envs3 \
  --conf spark.kubernetes.driver.secretKeyRef.MY_USERNAME=test-secret:username \
  --conf spark.kubernetes.driver.secretKeyRef.My_password=test-secret:password \
  --conf spark.kubernetes.executor.secretKeyRef.MY_USERNAME=test-secret:username \
  --conf spark.kubernetes.executor.secretKeyRef.My_password=test-secret:password \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.4.0-SNAPSHOT.jar 1
```

Secret loaded correctly in the driver container:
![image](https://user-images.githubusercontent.com/7945591/40174346-7fee70c8-59dd-11e8-8705-995a5472716f.png)

Also if I log into the exec container:

kubectl exec -it spark-pi-1526555613156-exec-1 bash
bash-4.4# env

> SPARK_EXECUTOR_MEMORY=1g
> SPARK_EXECUTOR_CORES=1
> LANG=C.UTF-8
> HOSTNAME=spark-pi-1526555613156-exec-1
> SPARK_APPLICATION_ID=spark-application-1526555618626
> **MY_USERNAME=stavros**
>
> JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk
> KUBERNETES_PORT_443_TCP_PROTO=tcp
> KUBERNETES_PORT_443_TCP_ADDR=10.100.0.1
> JAVA_VERSION=8u151
> KUBERNETES_PORT=tcp://10.100.0.1:443
> PWD=/opt/spark/work-dir
> HOME=/root
> SPARK_LOCAL_DIRS=/var/data/spark-b569b0ae-b7ef-4f91-bcd5-0f55535d3564
> KUBERNETES_SERVICE_PORT_HTTPS=443
> KUBERNETES_PORT_443_TCP_PORT=443
> SPARK_HOME=/opt/spark
> SPARK_DRIVER_URL=spark://CoarseGrainedSchedulerspark-pi-1526555613156-driver-svc.default.svc:7078
> KUBERNETES_PORT_443_TCP=tcp://10.100.0.1:443
> SPARK_EXECUTOR_POD_IP=9.0.9.77
> TERM=xterm
> SPARK_EXECUTOR_ID=1
> SHLVL=1
> KUBERNETES_SERVICE_PORT=443
> SPARK_CONF_DIR=/opt/spark/conf
> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-1.8-openjdk/jre/bin:/usr/lib/jvm/java-1.8-openjdk/bin
> JAVA_ALPINE_VERSION=8.151.12-r0
> KUBERNETES_SERVICE_HOST=10.100.0.1
> **My_password=39528$vdg7Jb**
> _=/usr/bin/env
>

Author: Stavros Kontopoulos 

Closes #21317 from skonto/k8s-fix-env-secrets.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/21e1fc7d
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/21e1fc7d
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/21e1fc7d

Branch: refs/heads/master
Commit: 21e1fc7d4aed688d7b685be6ce93f76752159c98
Parents: cc976f6
Author: Stavros Kontopoulos 
Authored: Thu May 31 14:28:33 2018 -0700
Committer: Anirudh Ramanathan 
Committed: Thu May 31 14:28:33 2018 -0700

--
 docs/running-on-kubernetes.md   | 22 
 .../org/apache/spark/deploy/k8s/Config.scala|  2 +
 .../spark/deploy/k8s/KubernetesConf.scala   | 11 +++-
 .../k8s/features/EnvSecretsFeatureStep.scala| 57 +++
 .../k8s/submit/KubernetesDriverBuilder.scala| 11 +++-
 .../cluster/k8s/KubernetesExecutorBuilder.scala | 12 +++-
 .../spark/deploy/k8s/KubernetesConfSuite.scala  | 12 +++-
 .../features/BasicDriverFeatureStepSuite.scala  |  2 +
 .../BasicExecutorFeatureStepSuite.scala |  3 +
 ...rKubernetesCredentialsFeatureStepSuite.scala |  3 +
 .../DriverServiceFeatureStepSuite.scala |  6 ++
 .../features/EnvSecretsFeatureStepSuite.scala   | 59 
 .../features/KubernetesFeaturesTestUtils.scala  |  7 ++-
 .../features/LocalDirsFeatureStepSuite.scala|  1 +
 .../features/MountSecretsFeatureStepSuite.scala |  1 +
 .../spark/deploy/k8s/submit/ClientSuite.scala   |  1 +
 .../submit/KubernetesDriverBuilderSuite.scala   | 13 -
 .../k8s/KubernetesExecutorBuilderSuite.scala| 11 +++-
 18 files changed, 222 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/21e1fc7d/docs/running-on-kubernetes.md
--

spark git commit: [SPARK-24137][K8S] Mount local directories as empty dir volumes.

2018-05-10 Thread foxish
Repository: spark
Updated Branches:
  refs/heads/master f4fed0512 -> 6282fc64e


[SPARK-24137][K8S] Mount local directories as empty dir volumes.

## What changes were proposed in this pull request?

This drastically improves performance and prevents Spark applications from failing because they write too much data to the Docker image's own file system. The file system directories that back emptyDir volumes are generally larger and more performant.
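
As a rough illustration of the mechanism (a minimal sketch with made-up directory names, not the actual LocalDirsFeatureStep code), each local directory gets backed by an emptyDir volume and mounted into the container at the corresponding path:

```scala
import io.fabric8.kubernetes.api.model.{VolumeBuilder, VolumeMountBuilder}

// Hypothetical local dirs; the real step derives these from configuration,
// with a randomized default such as /var/data/spark-<uuid>.
val localDirs = Seq("/var/data/spark-local/dir1", "/var/data/spark-local/dir2")

// One emptyDir volume per local dir ...
val volumes = localDirs.indices.map { i =>
  new VolumeBuilder()
    .withName(s"spark-local-dir-${i + 1}")
    .withNewEmptyDir()
    .endEmptyDir()
    .build()
}

// ... mounted into the container at the corresponding path, which is what the
// executor's SPARK_LOCAL_DIRS then points at.
val volumeMounts = localDirs.zipWithIndex.map { case (dir, i) =>
  new VolumeMountBuilder()
    .withName(s"spark-local-dir-${i + 1}")
    .withMountPath(dir)
    .build()
}
```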

## How was this patch tested?

This has been in use via the prototype version of the Kubernetes support, but was lost in the transition to the upstream codebase.

Author: mcheah 

Closes #21238 from mccheah/mount-local-dirs.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/6282fc64
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/6282fc64
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/6282fc64

Branch: refs/heads/master
Commit: 6282fc64e32fc2f70e79ace14efd4922e4535dbb
Parents: f4fed05
Author: mcheah 
Authored: Thu May 10 11:36:41 2018 -0700
Committer: Anirudh Ramanathan 
Committed: Thu May 10 11:36:41 2018 -0700

--
 .../main/scala/org/apache/spark/SparkConf.scala |   5 +-
 .../k8s/features/LocalDirsFeatureStep.scala |  77 +
 .../k8s/submit/KubernetesDriverBuilder.scala|  10 +-
 .../cluster/k8s/KubernetesExecutorBuilder.scala |   9 +-
 .../features/LocalDirsFeatureStepSuite.scala| 111 +++
 .../submit/KubernetesDriverBuilderSuite.scala   |  13 ++-
 .../k8s/KubernetesExecutorBuilderSuite.scala|  12 +-
 7 files changed, 223 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/6282fc64/core/src/main/scala/org/apache/spark/SparkConf.scala
--
diff --git a/core/src/main/scala/org/apache/spark/SparkConf.scala 
b/core/src/main/scala/org/apache/spark/SparkConf.scala
index 129956e..dab4095 100644
--- a/core/src/main/scala/org/apache/spark/SparkConf.scala
+++ b/core/src/main/scala/org/apache/spark/SparkConf.scala
@@ -454,8 +454,9 @@ class SparkConf(loadDefaults: Boolean) extends Cloneable 
with Logging with Seria
*/
   private[spark] def validateSettings() {
 if (contains("spark.local.dir")) {
-  val msg = "In Spark 1.0 and later spark.local.dir will be overridden by 
the value set by " +
-"the cluster manager (via SPARK_LOCAL_DIRS in mesos/standalone and 
LOCAL_DIRS in YARN)."
+  val msg = "Note that spark.local.dir will be overridden by the value set 
by " +
+"the cluster manager (via SPARK_LOCAL_DIRS in 
mesos/standalone/kubernetes and LOCAL_DIRS" +
+" in YARN)."
   logWarning(msg)
 }
 

http://git-wip-us.apache.org/repos/asf/spark/blob/6282fc64/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala
--
diff --git 
a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala
 
b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala
new file mode 100644
index 000..70b3073
--- /dev/null
+++ 
b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala
@@ -0,0 +1,77 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.deploy.k8s.features
+
+import java.nio.file.Paths
+import java.util.UUID
+
+import io.fabric8.kubernetes.api.model.{ContainerBuilder, HasMetadata, 
PodBuilder, VolumeBuilder, VolumeMountBuilder}
+
+import org.apache.spark.deploy.k8s.{KubernetesConf, 
KubernetesDriverSpecificConf, KubernetesRoleSpecificConf, SparkPod}
+
+private[spark] class LocalDirsFeatureStep(
+conf: KubernetesConf[_ <: KubernetesRoleSpecificConf],
+defaultLocalDir: String = s"/var/data/spark-${UUID.randomUUID}")
+  extends KubernetesFeatureConfigStep {
+
+  // 

[1/3] spark git commit: [SPARK-22839][K8S] Refactor to unify driver and executor pod builder APIs

2018-04-13 Thread foxish
Repository: spark
Updated Branches:
  refs/heads/master 0323e6146 -> a83ae0d9b


http://git-wip-us.apache.org/repos/asf/spark/blob/a83ae0d9/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/MountSecretsFeatureStepSuite.scala
--
diff --git 
a/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/MountSecretsFeatureStepSuite.scala
 
b/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/MountSecretsFeatureStepSuite.scala
new file mode 100644
index 000..9d02f56
--- /dev/null
+++ 
b/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/MountSecretsFeatureStepSuite.scala
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.deploy.k8s.features
+
+import io.fabric8.kubernetes.api.model.PodBuilder
+
+import org.apache.spark.{SparkConf, SparkFunSuite}
+import org.apache.spark.deploy.k8s.{KubernetesConf, 
KubernetesExecutorSpecificConf, SecretVolumeUtils, SparkPod}
+
+class MountSecretsFeatureStepSuite extends SparkFunSuite {
+
+  private val SECRET_FOO = "foo"
+  private val SECRET_BAR = "bar"
+  private val SECRET_MOUNT_PATH = "/etc/secrets/driver"
+
+  test("mounts all given secrets") {
+val baseDriverPod = SparkPod.initialPod()
+val secretNamesToMountPaths = Map(
+  SECRET_FOO -> SECRET_MOUNT_PATH,
+  SECRET_BAR -> SECRET_MOUNT_PATH)
+val sparkConf = new SparkConf(false)
+val kubernetesConf = KubernetesConf(
+  sparkConf,
+  KubernetesExecutorSpecificConf("1", new PodBuilder().build()),
+  "resource-name-prefix",
+  "app-id",
+  Map.empty,
+  Map.empty,
+  secretNamesToMountPaths,
+  Map.empty)
+
+val step = new MountSecretsFeatureStep(kubernetesConf)
+val driverPodWithSecretsMounted = step.configurePod(baseDriverPod).pod
+val driverContainerWithSecretsMounted = 
step.configurePod(baseDriverPod).container
+
+Seq(s"$SECRET_FOO-volume", s"$SECRET_BAR-volume").foreach { volumeName =>
+  assert(SecretVolumeUtils.podHasVolume(driverPodWithSecretsMounted, 
volumeName))
+}
+Seq(s"$SECRET_FOO-volume", s"$SECRET_BAR-volume").foreach { volumeName =>
+  assert(SecretVolumeUtils.containerHasVolume(
+driverContainerWithSecretsMounted, volumeName, SECRET_MOUNT_PATH))
+}
+  }
+}

http://git-wip-us.apache.org/repos/asf/spark/blob/a83ae0d9/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/submit/ClientSuite.scala
--
diff --git 
a/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/submit/ClientSuite.scala
 
b/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/submit/ClientSuite.scala
index 6a50159..c1b203e 100644
--- 
a/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/submit/ClientSuite.scala
+++ 
b/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/submit/ClientSuite.scala
@@ -16,22 +16,17 @@
  */
 package org.apache.spark.deploy.k8s.submit
 
-import scala.collection.JavaConverters._
-
-import com.google.common.collect.Iterables
 import io.fabric8.kubernetes.api.model._
 import io.fabric8.kubernetes.client.{KubernetesClient, Watch}
 import io.fabric8.kubernetes.client.dsl.{MixedOperation, 
NamespaceListVisitFromServerGetDeleteRecreateWaitApplicable, PodResource}
 import org.mockito.{ArgumentCaptor, Mock, MockitoAnnotations}
 import org.mockito.Mockito.{doReturn, verify, when}
-import org.mockito.invocation.InvocationOnMock
-import org.mockito.stubbing.Answer
 import org.scalatest.BeforeAndAfter
 import org.scalatest.mockito.MockitoSugar._
 
 import org.apache.spark.{SparkConf, SparkFunSuite}
+import org.apache.spark.deploy.k8s.{KubernetesConf, KubernetesDriverSpec, 
KubernetesDriverSpecificConf, SparkPod}
 import org.apache.spark.deploy.k8s.Constants._
-import org.apache.spark.deploy.k8s.submit.steps.DriverConfigurationStep
 
 class ClientSuite extends SparkFunSuite with BeforeAndAfter {
 
@@ -39,6 +34,74 @@ class ClientSuite extends 

[2/3] spark git commit: [SPARK-22839][K8S] Refactor to unify driver and executor pod builder APIs

2018-04-13 Thread foxish
http://git-wip-us.apache.org/repos/asf/spark/blob/a83ae0d9/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/steps/DependencyResolutionStep.scala
--
diff --git 
a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/steps/DependencyResolutionStep.scala
 
b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/steps/DependencyResolutionStep.scala
deleted file mode 100644
index 43de329..000
--- 
a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/steps/DependencyResolutionStep.scala
+++ /dev/null
@@ -1,61 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.spark.deploy.k8s.submit.steps
-
-import java.io.File
-
-import io.fabric8.kubernetes.api.model.ContainerBuilder
-
-import org.apache.spark.deploy.k8s.Constants._
-import org.apache.spark.deploy.k8s.KubernetesUtils
-import org.apache.spark.deploy.k8s.submit.KubernetesDriverSpec
-
-/**
- * Step that configures the classpath, spark.jars, and spark.files for the 
driver given that the
- * user may provide remote files or files with local:// schemes.
- */
-private[spark] class DependencyResolutionStep(
-sparkJars: Seq[String],
-sparkFiles: Seq[String]) extends DriverConfigurationStep {
-
-  override def configureDriver(driverSpec: KubernetesDriverSpec): 
KubernetesDriverSpec = {
-val resolvedSparkJars = KubernetesUtils.resolveFileUrisAndPath(sparkJars)
-val resolvedSparkFiles = KubernetesUtils.resolveFileUrisAndPath(sparkFiles)
-
-val sparkConf = driverSpec.driverSparkConf.clone()
-if (resolvedSparkJars.nonEmpty) {
-  sparkConf.set("spark.jars", resolvedSparkJars.mkString(","))
-}
-if (resolvedSparkFiles.nonEmpty) {
-  sparkConf.set("spark.files", resolvedSparkFiles.mkString(","))
-}
-val resolvedDriverContainer = if (resolvedSparkJars.nonEmpty) {
-  new ContainerBuilder(driverSpec.driverContainer)
-.addNewEnv()
-  .withName(ENV_MOUNTED_CLASSPATH)
-  .withValue(resolvedSparkJars.mkString(File.pathSeparator))
-  .endEnv()
-.build()
-} else {
-  driverSpec.driverContainer
-}
-
-driverSpec.copy(
-  driverContainer = resolvedDriverContainer,
-  driverSparkConf = sparkConf)
-  }
-}

http://git-wip-us.apache.org/repos/asf/spark/blob/a83ae0d9/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/steps/DriverConfigurationStep.scala
--
diff --git 
a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/steps/DriverConfigurationStep.scala
 
b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/steps/DriverConfigurationStep.scala
deleted file mode 100644
index 17614e0..000
--- 
a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/steps/DriverConfigurationStep.scala
+++ /dev/null
@@ -1,30 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.spark.deploy.k8s.submit.steps
-
-import org.apache.spark.deploy.k8s.submit.KubernetesDriverSpec
-
-/**
- * Represents a step in configuring the Spark driver pod.
- */
-private[spark] trait DriverConfigurationStep {
-
-  /**
-   * Apply some transformation to the previous state 

[3/3] spark git commit: [SPARK-22839][K8S] Refactor to unify driver and executor pod builder APIs

2018-04-13 Thread foxish
[SPARK-22839][K8S] Refactor to unify driver and executor pod builder APIs

## What changes were proposed in this pull request?

Breaks down the construction of driver and executor pods into a common abstraction used both by spark-submit when creating the driver and by KubernetesClusterSchedulerBackend when creating executors. This encourages more code reuse and is more legible than the older approach.

The high-level design is discussed in more detail on the JIRA ticket. This pull 
request is the implementation of that design with some minor changes in the 
implementation details.

No user-facing behavior should break as a result of this change.
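
For readers new to the design, the sketch below captures its core shape in heavily simplified form (names and signatures are illustrative, not the exact code): pod construction is expressed as a sequence of feature steps, each transforming a pod/container pair, and a builder folds the applicable steps over an initial pod for either the driver or the executor role:

```scala
import io.fabric8.kubernetes.api.model.{Container, Pod}

// Simplified sketch of the unified builder pattern (illustrative names only).
final case class SparkPod(pod: Pod, container: Container)

trait KubernetesFeatureConfigStep {
  def configurePod(pod: SparkPod): SparkPod
}

// Both the driver builder (used by spark-submit) and the executor builder
// (used by KubernetesClusterSchedulerBackend) apply their steps in order.
class KubernetesPodBuilder(steps: Seq[KubernetesFeatureConfigStep]) {
  def buildPod(initialPod: SparkPod): SparkPod =
    steps.foldLeft(initialPod)((pod, step) => step.configurePod(pod))
}
```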

## How was this patch tested?

Migrated all unit tests from the old submission-steps architecture to the new architecture. Integration tests should not need to change and should still pass, given that this does not change any outward behavior.

Author: mcheah 

Closes #20910 from mccheah/spark-22839-incremental.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/a83ae0d9
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/a83ae0d9
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/a83ae0d9

Branch: refs/heads/master
Commit: a83ae0d9bc1b8f4909b9338370efe4020079bea7
Parents: 0323e61
Author: mcheah 
Authored: Fri Apr 13 08:43:58 2018 -0700
Committer: Anirudh Ramanathan 
Committed: Fri Apr 13 08:43:58 2018 -0700

--
 .../org/apache/spark/deploy/k8s/Config.scala|   2 +-
 .../spark/deploy/k8s/KubernetesConf.scala   | 184 ++
 .../spark/deploy/k8s/KubernetesDriverSpec.scala |  31 +++
 .../spark/deploy/k8s/KubernetesUtils.scala  |  11 -
 .../deploy/k8s/MountSecretsBootstrap.scala  |  72 --
 .../org/apache/spark/deploy/k8s/SparkPod.scala  |  34 +++
 .../k8s/features/BasicDriverFeatureStep.scala   | 136 ++
 .../k8s/features/BasicExecutorFeatureStep.scala | 179 ++
 ...DriverKubernetesCredentialsFeatureStep.scala | 216 
 .../k8s/features/DriverServiceFeatureStep.scala |  97 
 .../features/KubernetesFeatureConfigStep.scala  |  71 ++
 .../k8s/features/MountSecretsFeatureStep.scala  |  62 +
 .../k8s/submit/DriverConfigOrchestrator.scala   | 145 ---
 .../submit/KubernetesClientApplication.scala|  80 +++---
 .../k8s/submit/KubernetesDriverBuilder.scala|  56 +
 .../k8s/submit/KubernetesDriverSpec.scala   |  47 
 .../steps/BasicDriverConfigurationStep.scala| 163 
 .../submit/steps/DependencyResolutionStep.scala |  61 -
 .../submit/steps/DriverConfigurationStep.scala  |  30 ---
 .../steps/DriverKubernetesCredentialsStep.scala | 245 ---
 .../submit/steps/DriverMountSecretsStep.scala   |  38 ---
 .../steps/DriverServiceBootstrapStep.scala  | 104 
 .../cluster/k8s/ExecutorPodFactory.scala| 227 -
 .../cluster/k8s/KubernetesClusterManager.scala  |  12 +-
 .../k8s/KubernetesClusterSchedulerBackend.scala |  20 +-
 .../cluster/k8s/KubernetesExecutorBuilder.scala |  41 
 .../spark/deploy/k8s/KubernetesConfSuite.scala  | 175 +
 .../spark/deploy/k8s/KubernetesUtilsTest.scala  |  36 ---
 .../features/BasicDriverFeatureStepSuite.scala  | 153 
 .../BasicExecutorFeatureStepSuite.scala | 179 ++
 ...rKubernetesCredentialsFeatureStepSuite.scala | 174 +
 .../DriverServiceFeatureStepSuite.scala | 227 +
 .../features/KubernetesFeaturesTestUtils.scala  |  61 +
 .../features/MountSecretsFeatureStepSuite.scala |  58 +
 .../spark/deploy/k8s/submit/ClientSuite.scala   | 216 
 .../submit/DriverConfigOrchestratorSuite.scala  | 131 --
 .../submit/KubernetesDriverBuilderSuite.scala   | 102 
 .../BasicDriverConfigurationStepSuite.scala | 122 -
 .../steps/DependencyResolutionStepSuite.scala   |  69 --
 .../DriverKubernetesCredentialsStepSuite.scala  | 153 
 .../steps/DriverMountSecretsStepSuite.scala |  49 
 .../steps/DriverServiceBootstrapStepSuite.scala | 180 --
 .../cluster/k8s/ExecutorPodFactorySuite.scala   | 195 ---
 ...KubernetesClusterSchedulerBackendSuite.scala |  37 +--
 .../k8s/KubernetesExecutorBuilderSuite.scala|  75 ++
 45 files changed, 2482 insertions(+), 2274 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/a83ae0d9/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
--
diff --git 
a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
 

spark git commit: [SPARK-23668][K8S] Add config option for passing through k8s Pod.spec.imagePullSecrets

2018-04-04 Thread foxish
Repository: spark
Updated Branches:
  refs/heads/master a35523653 -> cccaaa14a


[SPARK-23668][K8S] Add config option for passing through k8s 
Pod.spec.imagePullSecrets

## What changes were proposed in this pull request?

Pass through the `imagePullSecrets` option to the k8s pod in order to allow users to access private image registries.

See 
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
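
As a rough sketch of the plumbing (assuming the configured value is a comma-separated list of secret names such as "regcred1,regcred2"; not the verbatim implementation), the conversion into Pod.spec.imagePullSecrets references looks like this:

```scala
import io.fabric8.kubernetes.api.model.LocalObjectReference

// Sketch: turn an optional "secretA,secretB" string into the LocalObjectReference
// entries that go into Pod.spec.imagePullSecrets.
def parseImagePullSecrets(imagePullSecrets: Option[String]): List[LocalObjectReference] =
  imagePullSecrets match {
    case Some(commaSeparated) =>
      commaSeparated.split(',').map(_.trim).filter(_.nonEmpty)
        .map(new LocalObjectReference(_)).toList
    case None => Nil
  }
```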

## How was this patch tested?

Unit tests + manual testing.

Manual testing procedure:
1. Have private image registry.
2. Spark-submit an application with no `spark.kubernetes.container.image.pullSecrets` set. Do `kubectl describe pod ...`. See the error message:
```
Error syncing pod, skipping: failed to "StartContainer" for 
"spark-kubernetes-driver" with ErrImagePull: "rpc error: code = 2 desc = Error: 
Status 400 trying to pull repository ...: \"{\\n  \\\"errors\\\" : [ {\\n
\\\"status\\\" : 400,\\n\\\"message\\\" : \\\"Unsupported docker v1 
repository request for '...'\\\"\\n  } ]\\n}\""
```
3. Create secret `kubectl create secret docker-registry ...`
4. Spark-submit with `spark.kubernetes.container.image.pullSecrets` set to the new secret. See that the deployment was successful.

Author: Andrew Korzhuev 
Author: Andrew Korzhuev 

Closes #20811 from andrusha/spark-23668-image-pull-secrets.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/cccaaa14
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/cccaaa14
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/cccaaa14

Branch: refs/heads/master
Commit: cccaaa14ad775fb981e501452ba2cc06ff5c0f0a
Parents: a355236
Author: Andrew Korzhuev 
Authored: Wed Apr 4 12:30:52 2018 -0700
Committer: Anirudh Ramanathan 
Committed: Wed Apr 4 12:30:52 2018 -0700

--
 .../org/apache/spark/deploy/k8s/Config.scala|  7 
 .../spark/deploy/k8s/KubernetesUtils.scala  | 13 +++
 .../steps/BasicDriverConfigurationStep.scala|  7 +++-
 .../cluster/k8s/ExecutorPodFactory.scala|  4 +++
 .../spark/deploy/k8s/KubernetesUtilsTest.scala  | 36 
 .../BasicDriverConfigurationStepSuite.scala |  8 -
 .../cluster/k8s/ExecutorPodFactorySuite.scala   |  5 +++
 7 files changed, 78 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/cccaaa14/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
--
diff --git 
a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
 
b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
index 405ea47..82f6c71 100644
--- 
a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
+++ 
b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
@@ -54,6 +54,13 @@ private[spark] object Config extends Logging {
   .checkValues(Set("Always", "Never", "IfNotPresent"))
   .createWithDefault("IfNotPresent")
 
+  val IMAGE_PULL_SECRETS =
+ConfigBuilder("spark.kubernetes.container.image.pullSecrets")
+  .doc("Comma separated list of the Kubernetes secrets used " +
+"to access private image registries.")
+  .stringConf
+  .createOptional
+
   val KUBERNETES_AUTH_DRIVER_CONF_PREFIX =
   "spark.kubernetes.authenticate.driver"
   val KUBERNETES_AUTH_DRIVER_MOUNTED_CONF_PREFIX =

http://git-wip-us.apache.org/repos/asf/spark/blob/cccaaa14/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala
--
diff --git 
a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala
 
b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala
index 5bc0701..5b2bb81 100644
--- 
a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala
+++ 
b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala
@@ -16,6 +16,8 @@
  */
 package org.apache.spark.deploy.k8s
 
+import io.fabric8.kubernetes.api.model.LocalObjectReference
+
 import org.apache.spark.SparkConf
 import org.apache.spark.util.Utils
 
@@ -35,6 +37,17 @@ private[spark] object KubernetesUtils {
 sparkConf.getAllWithPrefix(prefix).toMap
   }
 
+  /**
+   * Parses comma-separated list of imagePullSecrets into K8s-understandable 
format
+   */
+  def parseImagePullSecrets(imagePullSecrets: Option[String]): 
List[LocalObjectReference] = {
+imagePullSecrets match {
+  case 

spark git commit: [SPARK-23285][K8S] Add a config property for specifying physical executor cores

2018-04-02 Thread foxish
Repository: spark
Updated Branches:
  refs/heads/master 6151f29f9 -> fe2b7a456


[SPARK-23285][K8S] Add a config property for specifying physical executor cores

## What changes were proposed in this pull request?

As mentioned in SPARK-23285, this PR introduces a new configuration property, `spark.kubernetes.executor.request.cores`, for specifying the physical CPU cores requested for each executor pod. This is to avoid changing the semantics of `spark.executor.cores` and `spark.task.cpus` and their role in task scheduling, task parallelism, dynamic resource allocation, etc. The new configuration property only determines the physical CPU cores available to an executor. An executor can still run multiple tasks simultaneously by using appropriate values for `spark.executor.cores` and `spark.task.cpus`.
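
For reference, a sketch of what such an entry looks like in Config.scala, mirroring the ConfigBuilder style used elsewhere in that file (the doc string here is illustrative, not the exact text):

```scala
import org.apache.spark.internal.config.ConfigBuilder

// Sketch of the new executor CPU-request property (doc text is illustrative).
val KUBERNETES_EXECUTOR_REQUEST_CORES =
  ConfigBuilder("spark.kubernetes.executor.request.cores")
    .doc("Physical CPU request for each executor pod, using Kubernetes cpu-unit " +
      "notation (e.g. 0.1, 500m, 1.5, 5).")
    .stringConf
    .createOptional
```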

## How was this patch tested?

Unit tests.

felixcheung srowen jiangxb1987 jerryshao mccheah foxish

Author: Yinan Li <y...@google.com>
Author: Yinan Li <liyinan...@gmail.com>

Closes #20553 from liyinan926/master.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/fe2b7a45
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/fe2b7a45
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/fe2b7a45

Branch: refs/heads/master
Commit: fe2b7a4568d65a62da6e6eb00fff05f248b4332c
Parents: 6151f29
Author: Yinan Li <y...@google.com>
Authored: Mon Apr 2 12:20:55 2018 -0700
Committer: Anirudh Ramanathan <ramanath...@google.com>
Committed: Mon Apr 2 12:20:55 2018 -0700

--
 docs/running-on-kubernetes.md   | 15 ---
 .../org/apache/spark/deploy/k8s/Config.scala|  6 +
 .../cluster/k8s/ExecutorPodFactory.scala| 12 ++---
 .../cluster/k8s/ExecutorPodFactorySuite.scala   | 27 
 4 files changed, 53 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/fe2b7a45/docs/running-on-kubernetes.md
--
diff --git a/docs/running-on-kubernetes.md b/docs/running-on-kubernetes.md
index 975b28d..9c46449 100644
--- a/docs/running-on-kubernetes.md
+++ b/docs/running-on-kubernetes.md
@@ -549,14 +549,23 @@ specific to Spark on Kubernetes.
   spark.kubernetes.driver.limit.cores
   (none)
   
-Specify the hard CPU 
[limit](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container)
 for the driver pod.
+Specify a hard cpu 
[limit](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container)
 for the driver pod.
   
 
 
+  spark.kubernetes.executor.request.cores
+  (none)
+  
+Specify the cpu request for each executor pod. Values conform to the 
Kubernetes 
[convention](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu).
 
+Example values include 0.1, 500m, 1.5, 5, etc., with the definition of cpu 
units documented in [CPU 
units](https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#cpu-units).
   
+This is distinct from spark.executor.cores: it is only used 
and takes precedence over spark.executor.cores for specifying the 
executor pod cpu request if set. Task 
+parallelism, e.g., number of tasks an executor can run concurrently is not 
affected by this.
+
+
   spark.kubernetes.executor.limit.cores
   (none)
   
-Specify the hard CPU 
[limit](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container)
 for each executor pod launched for the Spark Application.
+Specify a hard cpu 
[limit](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container)
 for each executor pod launched for the Spark Application.
   
 
 
@@ -593,4 +602,4 @@ specific to Spark on Kubernetes.
spark.kubernetes.executor.secrets.spark-secret=/etc/secrets.
   
 
-
\ No newline at end of file
+

http://git-wip-us.apache.org/repos/asf/spark/blob/fe2b7a45/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
--
diff --git 
a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
 
b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
index da34a7e..405ea47 100644
--- 
a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
+++ 
b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
@@ -91,6 +91,12 @@ private[spark] object Config extends L

spark git commit: [SPARK-23618][K8S][BUILD] Initialize BUILD_ARGS in docker-image-tool.sh

2018-03-12 Thread foxish
Repository: spark
Updated Branches:
  refs/heads/master b304e07e0 -> d5b41aea6


[SPARK-23618][K8S][BUILD] Initialize BUILD_ARGS in docker-image-tool.sh

## What changes were proposed in this pull request?

This change initializes BUILD_ARGS to an empty array when $SPARK_HOME/RELEASE 
exists.

In function build, "local BUILD_ARGS" effectively creates an array of one 
element where the first and only element is an empty string, so 
"${BUILD_ARGS[]}" expands to "" and passes an extra argument to docker.

Setting BUILD_ARGS to an empty array makes "${BUILD_ARGS[]}" expand to nothing.

## How was this patch tested?

Manually tested.

$ cat RELEASE
Spark 2.3.0 (git revision a0d7949896) built for Hadoop 2.7.3
Build flags: -Phadoop-2.7 -Phive -Phive-thriftserver -Pkafka-0-8 -Pmesos -Pyarn 
-Pkubernetes -Pflume -Psparkr -DzincPort=3036
$ ./bin/docker-image-tool.sh -m -t testing build
Sending build context to Docker daemon  256.4MB
...

vanzin

Author: Jooseong Kim <joose...@pinterest.com>

Closes #20791 from jooseong/SPARK-23618.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/d5b41aea
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/d5b41aea
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/d5b41aea

Branch: refs/heads/master
Commit: d5b41aea62201cd5b1baad2f68f5fc7eb99c62c5
Parents: b304e07
Author: Jooseong Kim <joose...@pinterest.com>
Authored: Mon Mar 12 11:31:34 2018 -0700
Committer: foxish <ramanath...@google.com>
Committed: Mon Mar 12 11:31:34 2018 -0700

--
 bin/docker-image-tool.sh | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/d5b41aea/bin/docker-image-tool.sh
--
diff --git a/bin/docker-image-tool.sh b/bin/docker-image-tool.sh
index 0714063..0d0f564 100755
--- a/bin/docker-image-tool.sh
+++ b/bin/docker-image-tool.sh
@@ -57,6 +57,7 @@ function build {
   else
 # Not passed as an argument to docker, but used to validate the Spark 
directory.
 IMG_PATH="kubernetes/dockerfiles"
+BUILD_ARGS=()
   fi
 
   if [ ! -d "$IMG_PATH" ]; then

