Github user erikerlandson commented on the issue:
https://github.com/apache/spark/pull/22433
@suryag10, all things being equal, it is considered preferable to provide
testing for new functionality on the same PR. Are there any logistical problems
with adding testing here?
---
Github user erikerlandson commented on the issue:
https://github.com/apache/spark/pull/22433
@suryag10 you were probably encountering github server problems from
yesterday:
https://status.github.com/messages
---
Github user suryag10 commented on the issue:
https://github.com/apache/spark/pull/22433
I am observing some weird behaviour when I try to respond to the
comments inline, so I am adding the responses to the comments below.
Following are the responses to the comments:
> The
Github user suryag10 commented on the issue:
https://github.com/apache/spark/pull/22433
Can somebody please merge this?
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands,
Github user suryag10 commented on the issue:
https://github.com/apache/spark/pull/22433
.
---
Github user suryag10 commented on the issue:
https://github.com/apache/spark/pull/22433
> If possible, there should be some basic integration testing. Run a thrift
server command against the minishift cluster used by the other testing.
Will add this in a separate PR.
---
Github user suryag10 commented on the issue:
https://github.com/apache/spark/pull/22433
> In the scenario of a cluster-mode submission, what is the command-line
behavior? Does the thrift-server script "block" until the thrift server pod is
shut down?
By default the script
Github user suryag10 commented on the issue:
https://github.com/apache/spark/pull/22433
@liyinan926
> The script may be run from a client machine outside a k8s cluster. In
this case, there's not even a pod.
> I would suggest separating the explanation
of the user flow details by
Github user erikerlandson commented on the issue:
https://github.com/apache/spark/pull/22433
If possible, there should be some basic integration testing. Run a thrift
server command against the minishift cluster used by the other testing.
---
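A sketch of what such an integration check might look like from the shell. The minishift IP and container image here are placeholders, not values taken from the PR; in practice the IP would come from `minishift ip`:

```shell
# Hypothetical sketch: submit the thrift server to a minishift cluster.
# MINISHIFT_IP and the image name are assumptions for illustration.
MINISHIFT_IP="192.168.99.100"   # in practice: MINISHIFT_IP=$(minishift ip)

SUBMIT_CMD="./sbin/start-thriftserver.sh \
  --master k8s://https://${MINISHIFT_IP}:8443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=spark:v2.4.0"

# Print rather than run, so the sketch can be inspected without a live cluster.
echo "$SUBMIT_CMD"
```

An integration test would run the command instead of echoing it, then verify that a driver pod appears and that a JDBC client can connect to the exposed thrift port.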
Github user erikerlandson commented on the issue:
https://github.com/apache/spark/pull/22433
In the scenario of a cluster-mode submission, what is the command-line
behavior? Does the thrift-server script "block" until the thrift server pod is
shut down?
---
Github user suryag10 commented on the issue:
https://github.com/apache/spark/pull/22433
Can somebody please review and merge?
---
Github user nrchakradhar commented on the issue:
https://github.com/apache/spark/pull/22433
This PR is now the same as
[PR-20272](https://github.com/apache/spark/pull/20272).
The conversation in [PR-20272](https://github.com/apache/spark/pull/20272)
has some useful information which
Github user suryag10 commented on the issue:
https://github.com/apache/spark/pull/22433
@mridulm @liyinan926 @jacobdr @ifilonenko
The code check for space and "/" handling is already present at
Github user suryag10 commented on the issue:
https://github.com/apache/spark/pull/22433
> > Agreed with @mridulm that the naming restriction is specific to k8s and
should be handled in a k8s specific way, e.g., somewhere around
Github user suryag10 commented on the issue:
https://github.com/apache/spark/pull/22433
> Agreed with @mridulm that the naming restriction is specific to k8s and
should be handled in a k8s specific way, e.g., somewhere around
Github user liyinan926 commented on the issue:
https://github.com/apache/spark/pull/22433
Agreed with @mridulm that the naming restriction is specific to k8s and
should be handled in a k8s specific way, e.g., somewhere around
Github user suryag10 commented on the issue:
https://github.com/apache/spark/pull/22433
> I'm wondering, is there some reason this isn't supported in cluster mode
for yarn & mesos? Or put another way, what is the rationale for k8s being added
as an exception to this rule?
I
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/22433
> As this script is a common start point for all the resource
managers (k8s/yarn/mesos/standalone/local), I guess changing this to fit all
the cases has value, instead of doing it at each
Github user jacobdr commented on the issue:
https://github.com/apache/spark/pull/22433
> a DNS-1123 subdomain must consist of lower case alphanumeric characters,
'-' or '.'
Your changes to the name handling don't comply with this, so I agree with
@mridulm that you should move
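For reference, the DNS-1123 subdomain rule quoted above can be checked mechanically. A minimal sketch in Python (illustrative, not Spark's code; the regex mirrors the Kubernetes validation pattern):

```python
import re

# A DNS-1123 subdomain is one or more dot-separated labels, each made of
# lowercase alphanumerics and '-', starting and ending with an alphanumeric.
LABEL = r"[a-z0-9]([-a-z0-9]*[a-z0-9])?"
DNS1123_SUBDOMAIN = re.compile(rf"{LABEL}(\.{LABEL})*")

def is_dns1123_subdomain(name: str) -> bool:
    # Kubernetes also caps subdomain names at 253 characters.
    return len(name) <= 253 and DNS1123_SUBDOMAIN.fullmatch(name) is not None

print(is_dns1123_subdomain("thrift-server"))   # True
print(is_dns1123_subdomain("Thrift Server"))   # False: uppercase and space
```

Names like the default thrift-server app name, which contains spaces and "/", fail this check, which is the root of the discussion here.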
Github user erikerlandson commented on the issue:
https://github.com/apache/spark/pull/22433
I'm wondering, is there some reason this isn't supported in cluster mode
for yarn & mesos? Or put another way, what is the rationale for k8s being added
as an exception to this rule?
---
Github user suryag10 commented on the issue:
https://github.com/apache/spark/pull/22433
> It is an implementation detail of k8s integration that application name
is expected to be DNS compliant ... spark does not have that requirement; and
yarn/mesos/standalone/local work without
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/22433
It is an implementation detail of k8s integration that application name is
expected to be DNS compliant ... spark does not have that requirement; and
yarn/mesos/standalone/local work without this
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/22433
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/96105/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/22433
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/22433
**[Test build #96105 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/96105/testReport)**
for PR 22433 at commit
Github user suryag10 commented on the issue:
https://github.com/apache/spark/pull/22433
> Does it fail in k8s or does the spark k8s code error out?
> If the former, why not fix 'name' handling in k8s to replace unsupported
characters?
Following is the error seen without the
Following is the error seen without the
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/22433
Does it fail in k8s or does the spark k8s code error out?
If the former, why not fix 'name' handling in k8s to replace unsupported
characters?
---
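One way to realize the suggestion of replacing unsupported characters inside the k8s backend. This is an illustrative Python sketch, not the PR's actual Scala code; the function name and replacement rules are assumptions:

```python
import re

def sanitize_k8s_name(app_name: str, max_len: int = 63) -> str:
    """Illustrative only: map an arbitrary Spark app name to a
    DNS-1123-friendly form (lowercase alphanumerics and '-')."""
    name = app_name.lower()
    # Replace every disallowed character (spaces, '/', '_', '.', ...) with '-'.
    name = re.sub(r"[^a-z0-9-]", "-", name)
    # Collapse runs of '-' and trim leading/trailing '-'.
    name = re.sub(r"-+", "-", name).strip("-")
    # k8s resource names are length-limited; truncate, then re-trim.
    return name[:max_len].rstrip("-")

print(sanitize_k8s_name("Thrift JDBC/ODBC Server"))  # thrift-jdbc-odbc-server
```

Doing this inside the k8s submission path, rather than in the shared launcher scripts, keeps the DNS restriction out of the yarn/mesos/standalone/local code paths, which is the point being argued above.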