Github user daisukebe closed the pull request at:
https://github.com/apache/spark/pull/16195
---
Github user daisukebe commented on the issue:
https://github.com/apache/spark/pull/16195
Oh, I see. I was not aware of that. Please close this PR then. Thanks!
---
Github user daisukebe commented on the issue:
https://github.com/apache/spark/pull/16195
@vanzin, 2.0 already has this capability per
https://issues.apache.org/jira/browse/SPARK-529, so my patch targets 1.6.
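For readers following along: with unit support, the overhead settings can carry byte-string suffixes instead of bare megabyte integers. A minimal usage sketch (the suffixed values assume Spark 2.0+ or this patch; stock 1.6 accepts only plain integers, interpreted as MB):
```scala
import org.apache.spark.SparkConf

// Overhead values with explicit units; a bare number would still be
// read as megabytes for backward compatibility.
val conf = new SparkConf()
  .set("spark.yarn.am.memoryOverhead", "512m")
  .set("spark.yarn.driver.memoryOverhead", "512m")
  .set("spark.yarn.executor.memoryOverhead", "1g")
```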
---
Github user daisukebe commented on a diff in the pull request:
https://github.com/apache/spark/pull/16195#discussion_r91589664
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/ClientArguments.scala ---
@@ -61,11 +61,15 @@ private[spark] class ClientArguments(args: Array
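The diff content is cut off above, but the general approach for such a change is to run the configured value through Spark's byte-string parser instead of reading it as a raw integer. A minimal sketch, assuming code living in a Spark-internal package where the `private[spark]` `Utils` object is visible (the method name is illustrative, not the actual patch):
```scala
import org.apache.spark.util.Utils

// Accept "384", "384m", "1g", etc.; Utils.byteStringAsMb treats a
// bare number as mebibytes, preserving the old integer-MB behavior.
def parseMemoryOverheadMb(value: String): Long =
  Utils.byteStringAsMb(value)
```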
Github user daisukebe commented on the issue:
https://github.com/apache/spark/pull/16195
This doesn't look like it correlates with my code change. Should I fix this,
@vanzin?
```
Traceback (most recent call last):
File "./dev/run-tests-jenkins.py"
```
Github user daisukebe commented on the issue:
https://github.com/apache/spark/pull/16195
Per @vanzin's suggestion,
- revised the code style,
- added a new default variable (see the sketch below),
- and also fixed the warning: "
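On the "new default variable" point: when no explicit overhead is configured, Spark's YARN client falls back to a computed value of max(10% of the requested memory, 384 MB). A sketch of that default logic (the constant names here are illustrative, not necessarily those in the patch):
```scala
// Fallback used when no explicit memoryOverhead is configured:
// ten percent of the requested memory, but never below 384 MB.
val MemoryOverheadFactor = 0.10
val MemoryOverheadMinMb = 384L

def defaultMemoryOverheadMb(memoryMb: Long): Long =
  math.max((MemoryOverheadFactor * memoryMb).toLong, MemoryOverheadMinMb)
```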
---
GitHub user daisukebe opened a pull request:
https://github.com/apache/spark/pull/16195
[SPARK-18765] [CORE] Make values for
spark.yarn.{am|driver|executor}.memoryOverhead have configurable units
## What changes were proposed in this pull request?
Make values for
Github user daisukebe commented on the pull request:
https://github.com/apache/spark/pull/6234#issuecomment-103308933
Thanks guys. Then, does adding the following make sense?
> If the Spark version is prior to 1.3.0, the user needs to explicitly imp
GitHub user daisukebe opened a pull request:
https://github.com/apache/spark/pull/6234
Updating Programming Guides per SPARK-4397
The change per SPARK-4397 makes the implicit objects in SparkContext
discoverable by the compiler automatically, so we no longer need to import
SparkContext._ explicitly.
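Concretely, the guide update concerns the pattern below (a minimal sketch; the data and operation are illustrative):
```scala
import org.apache.spark.{SparkConf, SparkContext}

// Before Spark 1.3.0 an extra import was required so that pair-RDD
// operations such as reduceByKey would resolve:
//   import org.apache.spark.SparkContext._
// Since SPARK-4397 the implicits live on the RDD companion object and
// are found by the compiler automatically.
val sc = new SparkContext(new SparkConf().setAppName("demo").setMaster("local[*]"))
val counts = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3))).reduceByKey(_ + _)
println(counts.collect().mkString(", "))
sc.stop()
```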