Github user jason-dai commented on a diff in the pull request:
https://github.com/apache/spark/pull/19717#discussion_r155787931
--- Diff:
resource-managers/kubernetes/docker/src/main/dockerfiles/spark-base/Dockerfile
---
@@ -0,0 +1,47 @@
+#
+# Licensed to the Apache
Github user jason-dai commented on a diff in the pull request:
https://github.com/apache/spark/pull/19717#discussion_r155710219
--- Diff:
resource-managers/kubernetes/docker/src/main/dockerfiles/spark-base/Dockerfile
---
@@ -0,0 +1,47 @@
+#
+# Licensed to the Apache
Github user jason-dai commented on the pull request:
https://github.com/apache/spark/pull/1297#issuecomment-79776523
@jegonzal I wonder if you can share more details on your stack overflow
issue. We were considering a general fix (e.g., as I outlined in
https://issues.apache.org/jira
Github user jason-dai commented on the pull request:
https://github.com/apache/spark/pull/3545#issuecomment-65360084
I believe ClosureCleaner.clean() is defined to deal with exactly this
issue: Scala may capture the entire enclosing class in a closure, even if only
one member variable is used.
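A minimal, self-contained sketch of the capture problem described above (the class and method names here are hypothetical illustrations, not code from the PR): referencing a field inside a function literal desugars to `this.field`, so the closure drags in the whole enclosing object; copying the field to a local `val` first keeps `this` out of the closure, which is essentially what ClosureCleaner automates.

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

// Hypothetical enclosing class; deliberately NOT Serializable.
class Holder(val offset: Int) {
  // `offset` is really `this.offset`, so this closure captures all of Holder.
  def captures: Int => Int = x => x + offset

  // Copying the field to a local val means the closure only captures an Int.
  def clean: Int => Int = { val o = offset; x => x + o }
}

// Helper: can this object be Java-serialized (as Spark must do to ship tasks)?
def serializable(obj: AnyRef): Boolean =
  try { new ObjectOutputStream(new ByteArrayOutputStream).writeObject(obj); true }
  catch { case _: NotSerializableException => false }

val h = new Holder(1)
println(serializable(h.captures)) // fails: the closure references non-serializable Holder
println(serializable(h.clean))    // succeeds: only the captured Int is serialized
```

The cleaned variant also still computes the same result (`h.clean(2)` gives `3`), so the transformation only changes what the closure references, not what it does.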
Github user jason-dai commented on the pull request:
https://github.com/apache/spark/pull/3545#issuecomment-65345271
Maybe we can try something like:
class ZippedPartitionsRDD2 (sc, f, …) {
val cleanF(part1, part2, ctx) = sc.clean(f(rdd1.iterator(part1, ctx
Github user jason-dai commented on the pull request:
https://github.com/apache/spark/pull/3549#issuecomment-65343799
Maybe we can try something like:
class ZippedPartitionsRDD2 (sc, f, …) {
val cleanF(part1, part2, ctx) = sc.clean(f(rdd1.iterator(part1, ctx