[GitHub] spark issue #22433: [SPARK-25442][SQL][K8S] Support STS to run in k8s deploy...
Github user erikerlandson commented on the issue: https://github.com/apache/spark/pull/22433 @suryag10, all things being equal, it is considered preferable to provide testing for new functionality on the same PR. Are there any logistical problems with adding testing here? --- - To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For additional commands, e-mail: reviews-h...@spark.apache.org
Github user erikerlandson commented on the issue: https://github.com/apache/spark/pull/22433 @suryag10 you were probably encountering the GitHub server problems from yesterday: https://status.github.com/messages
Github user suryag10 commented on the issue: https://github.com/apache/spark/pull/22433 I am observing some odd behaviour when I try to respond to the comments inline, so I am adding the responses below. >> The script may be run from a client machine outside a k8s cluster. In this case, there's not even a pod. I would suggest separating the explanation of the user flow details by the deploy mode (client vs cluster). STS is a server, and the best way to deploy it in a K8s cluster is through a Helm chart or through a YAML file. Although it can be done through the method you suggested, I expect that scenario to be a rare case, and there would be no HA for the STS server if it is triggered from outside. >> In the scenario of a cluster-mode submission, what is the command-line behavior? Does the thrift-server script "block" until the thrift server pod is shut down? By default the script returns, but it can be made to block by setting the environment variable SPARK_NO_DAEMONIZE. Once this is set, the script blocks until the thrift server pod is shut down. >> If possible, there should be some basic integration testing. Run a thrift server command against the minishift cluster used by the other testing. Will add it as a separate PR. Please merge this if you are OK with the responses.
Github user suryag10 commented on the issue: https://github.com/apache/spark/pull/22433 Can somebody please merge this?
Github user suryag10 commented on the issue: https://github.com/apache/spark/pull/22433 > If possible, there should be some basic integration testing. Run a thrift server command against the minishift cluster used by the other testing. Will add this as a separate PR.
Github user suryag10 commented on the issue: https://github.com/apache/spark/pull/22433 > In the scenario of a cluster-mode submission, what is the command-line behavior? Does the thrift-server script "block" until the thrift server pod is shut down? By default the script returns, but it can be made to block by setting the environment variable SPARK_NO_DAEMONIZE. Once this is set, the script blocks until the thrift server pod is shut down.
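The return-vs-block behaviour described above can be sketched as follows. This is a hedged, simplified illustration of the daemonize pattern used by Spark's `sbin/spark-daemon.sh`, not the actual script; `run_command` and the stand-in commands are hypothetical names for demonstration.

```shell
#!/usr/bin/env bash
# Hedged sketch of the daemonize logic (simplified, not the real Spark script).
# With SPARK_NO_DAEMONIZE set, the command runs in the foreground and the
# calling script blocks until it exits; otherwise the command is backgrounded
# and the script returns immediately.
run_command() {
  if [ -n "${SPARK_NO_DAEMONIZE+set}" ]; then
    "$@"                             # foreground: blocks until the server exits
  else
    nohup "$@" >/dev/null 2>&1 &     # background: script returns immediately
    echo "started with pid $!"
  fi
}

# Demo with stand-in commands instead of the real Thrift server launcher:
unset SPARK_NO_DAEMONIZE
run_command sleep 1                  # daemonized; prints the background pid

export SPARK_NO_DAEMONIZE=1
run_command echo "ran in foreground" # blocks; output appears when it finishes
```

In the real scripts the same environment variable is read before launching the server class, which is why exporting it before `sbin/start-thriftserver.sh` makes the submission block.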
Github user suryag10 commented on the issue: https://github.com/apache/spark/pull/22433 @liyinan926 >> The script may be run from a client machine outside a k8s cluster. In this case, there's not even a pod. >> I would suggest separating the explanation of the user flow details by the deploy mode (client vs cluster). STS is a server, and the best way to deploy it in a K8s cluster is through a Helm chart or through a YAML file. Although it can be done through the method you suggested, I expect that scenario to be a rare case, and there would be no HA for the STS server if it is triggered from outside.
Github user erikerlandson commented on the issue: https://github.com/apache/spark/pull/22433 If possible, there should be some basic integration testing. Run a thrift server command against the minishift cluster used by the other testing.
Github user erikerlandson commented on the issue: https://github.com/apache/spark/pull/22433 In the scenario of a cluster-mode submission, what is the command-line behavior? Does the thrift-server script "block" until the thrift server pod is shut down?
Github user suryag10 commented on the issue: https://github.com/apache/spark/pull/22433 Can somebody please review and merge?
Github user nrchakradhar commented on the issue: https://github.com/apache/spark/pull/22433 This PR is now the same as [PR-20272](https://github.com/apache/spark/pull/20272). The conversation in [PR-20272](https://github.com/apache/spark/pull/20272) has some useful information that could be included in the Spark documentation. Also, it is worth mentioning that STS will not work with dynamicAllocation, as shuffle support is not yet available.
Github user suryag10 commented on the issue: https://github.com/apache/spark/pull/22433 @mridulm @liyinan926 @jacobdr @ifilonenko The code that handles spaces and "/" in the name is already present at https://github.com/apache/spark/blob/master/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala#L259 I have reverted the fix in start-thriftserver.sh. Please review and merge.
Github user suryag10 commented on the issue: https://github.com/apache/spark/pull/22433 > > Agreed with @mridulm that the naming restriction is specific to k8s and should be handled in a k8s specific way, e.g., somewhere around https://github.com/apache/spark/blob/master/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala#L208. > > OK, will update the PR with the same. Hi, handling of this conversion is already present in https://github.com/apache/spark/blob/master/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala#L259 I have reverted the change in the start-thriftserver.sh file. Please review and merge.
Github user suryag10 commented on the issue: https://github.com/apache/spark/pull/22433 > Agreed with @mridulm that the naming restriction is specific to k8s and should be handled in a k8s specific way, e.g., somewhere around https://github.com/apache/spark/blob/master/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala#L208. OK, will update the PR with the same.
Github user liyinan926 commented on the issue: https://github.com/apache/spark/pull/22433 Agreed with @mridulm that the naming restriction is specific to k8s and should be handled in a k8s specific way, e.g., somewhere around https://github.com/apache/spark/blob/master/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala#L208.
Github user suryag10 commented on the issue: https://github.com/apache/spark/pull/22433 > I'm wondering, is there some reason this isn't supported in cluster mode for yarn & mesos? Or put another way, what is the rationale for k8s being added as an exception to this rule? I don't know the specific reason why this was not supported in yarn and mesos. The initial contributions to Spark on K8s started with cluster mode (with a restriction against client mode), so this PR enhances things so that STS can run in k8s deployments in Spark cluster mode. (In the latest Spark code I observed that client mode also works; I need to cross-verify this.)
Github user mridulm commented on the issue: https://github.com/apache/spark/pull/22433 > As this script is the common starting point for all the resource managers (k8s/yarn/mesos/standalone/local), I guess changing it to fit all cases adds value, instead of doing it at each resource manager level. Thoughts? Please note that I am specifically referring only to the need for changing the application `name`. The rationale given, that `name` should be DNS compliant, is a restriction specific to k8s and not Spark. Instead of doing one-off renames, the right approach would be to handle this name translation so that it benefits not just STS, but any user application.
Github user jacobdr commented on the issue: https://github.com/apache/spark/pull/22433 > a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.' Your changes to the name handling don't comply with this, so I agree with @mridulm that you should move this change elsewhere and more broadly support name validation/sanitization for submitted applications in Kubernetes.
Github user erikerlandson commented on the issue: https://github.com/apache/spark/pull/22433 I'm wondering, is there some reason this isn't supported in cluster mode for yarn & mesos? Or put another way, what is the rationale for k8s being added as an exception to this rule?
Github user suryag10 commented on the issue: https://github.com/apache/spark/pull/22433 > It is an implementation detail of k8s integration that application name is expected to be DNS compliant ... spark does not have that requirement; and yarn/mesos/standalone/local work without this restriction. > The right fix in k8s integration would be to sanitize the name specified by user/application to be compliant with its requirements. This will help not just with thrift server, but any spark application. As this script is the common starting point for all the resource managers (k8s/yarn/mesos/standalone/local), I guess changing it to fit all cases adds value, instead of doing it at each resource manager level. Thoughts?
Github user mridulm commented on the issue: https://github.com/apache/spark/pull/22433 It is an implementation detail of k8s integration that application name is expected to be DNS compliant ... spark does not have that requirement; and yarn/mesos/standalone/local work without this restriction. The right fix in k8s integration would be to sanitize the name specified by user/application to be compliant with its requirements. This will help not just with thrift server, but any spark application.
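The sanitization suggested above can be sketched roughly as follows. This is a hedged illustration, not Spark's actual implementation (the real handling lives in `KubernetesClientApplication.scala`); the `sanitize_name` helper is a hypothetical name: lowercase the application name, replace characters that are illegal in a DNS-1123 subdomain with `-`, and trim any leading or trailing non-alphanumeric characters.

```shell
#!/usr/bin/env bash
# Hedged sketch of DNS-1123 name sanitization (not Spark's real code):
# lowercase, replace illegal characters with '-', trim leading/trailing
# non-alphanumerics so the result starts and ends with [a-z0-9].
sanitize_name() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -e 's/[^a-z0-9.-]/-/g' -e 's/^[^a-z0-9]*//' -e 's/[^a-z0-9]*$//'
}

sanitize_name "Thrift JDBC/ODBC Server"   # -> thrift-jdbc-odbc-server
echo
```

Doing the translation once at submission time, as sketched here, would cover any user-supplied application name rather than special-casing the Thrift server script.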
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/22433 Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/96105/ Test FAILed.
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/22433 Merged build finished. Test FAILed.
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/22433 **[Test build #96105 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/96105/testReport)** for PR 22433 at commit [`3a7fa57`](https://github.com/apache/spark/commit/3a7fa571181e4b0494f2b705fbd07bc61b0ca6ce). * This patch **fails due to an unknown error code, -9**. * This patch merges cleanly. * This patch adds no public classes.
Github user suryag10 commented on the issue: https://github.com/apache/spark/pull/22433 > Does it fail in k8s or does spark k8s code error out ? > If the former, why not fix 'name' handling in k8s to replace unsupported characters ? The following is the error seen without the fix: Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://k8s-apiserver.bcmt.cluster.local:8443/api/v1/namespaces/default/pods. Message: Pod "thrift jdbc/odbc server-1537079590890-driver" is invalid: metadata.name: Invalid value: "thrift jdbc/odbc server-1537079590890-driver": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'). Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=metadata.name, message=Invalid value: "thrift jdbc/odbc server-1537079590890-driver": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'), reason=FieldValueInvalid, additionalProperties={})], group=null, kind=Pod, name=thrift jdbc/odbc server-1537079590890-driver, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Pod "thrift jdbc/odbc server-1537079590890-driver" is invalid: metadata.name: Invalid value: "thrift jdbc/odbc server-1537079590890-driver": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'), metadata=ListMeta(resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}). This is not specific to Kubernetes, but a generic DNS (DNS-1123) naming requirement.
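The validation the API server applies above can be reproduced locally with the DNS-1123 subdomain regex quoted in the error message. This is a hedged sketch for pre-checking a generated pod name before submission; `is_dns1123_subdomain` is a hypothetical helper, not part of Spark.

```shell
#!/usr/bin/env bash
# Hedged sketch: check a candidate pod name against the DNS-1123 subdomain
# regex quoted in the Kubernetes error message above.
is_dns1123_subdomain() {
  printf '%s' "$1" \
    | grep -Eq '^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$'
}

# The name from the error above fails because of the spaces and '/':
is_dns1123_subdomain "thrift jdbc/odbc server-1537079590890-driver" \
  || echo "rejected: spaces and '/' are not allowed"

# A sanitized variant of the same name passes:
is_dns1123_subdomain "thrift-jdbc-odbc-server-1537079590890-driver" \
  && echo "accepted"
```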
Github user mridulm commented on the issue: https://github.com/apache/spark/pull/22433 Does it fail in k8s or does the spark k8s code error out? If the former, why not fix 'name' handling in k8s to replace unsupported characters?