Github user retronym commented on the issue:
https://github.com/apache/spark/pull/19675
I suppose the problematic case is:
```
scala> :javap p2.C$D#$anonfun$test$1
public static final java.lang.String $anonfun$test$1(p2.C$D, java.lang.String);
descrip
```
Github user retronym commented on the issue:
https://github.com/apache/spark/pull/19675
It just occurred to me that my answer above, discussing ways to access the
bytecode of the lambda class, isn't really relevant here. The structure of the
lambda class is always the same: it
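[Editor's note: the comment above is truncated. For reference, here is a minimal, self-contained sketch of how one might inspect a lambda's `SerializedLambda`, assuming Scala 2.12+, where function literals are compiled via `LambdaMetafactory` with the serializable flag; names like `LambdaInfo` are illustrative.]

```scala
import java.lang.invoke.SerializedLambda

object LambdaInfo {
  def main(args: Array[String]): Unit = {
    val f: String => String = s => s.toUpperCase
    // A serializable lambda proxy class carries a synthetic writeReplace
    // method that returns a SerializedLambda describing the implementation
    // method (the $anonfun$... static method in the capturing class).
    val wr = f.getClass.getDeclaredMethod("writeReplace")
    wr.setAccessible(true)
    val sl = wr.invoke(f).asInstanceOf[SerializedLambda]
    println(sl.getImplClass)      // class that holds the implementation method
    println(sl.getImplMethodName) // name of the implementation method
  }
}
```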
Github user retronym commented on the issue:
https://github.com/apache/spark/pull/19675
Another, semi-nuclear approach is to use an unsupported option of
`LambdaMetafactory` to dump generated classfiles to disk.
```
⚡ mkdir /tmp/dump
~/code/compiler-benchmark
```
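[Editor's note: the snippet is cut off. The unsupported option referred to is the internal `jdk.internal.lambda.dumpProxyClasses` system property; a sketch of how it might be used follows, with file and class names purely illustrative.]

```shell
# jdk.internal.lambda.dumpProxyClasses is an unsupported, internal JDK
# property and may change or disappear between releases.
mkdir -p /tmp/dump

# Run the program with the property set; LambdaMetafactory writes each
# generated proxy classfile under /tmp/dump, mirroring the package layout.
scala -J-Djdk.internal.lambda.dumpProxyClasses=/tmp/dump MyApp.scala

# Disassemble a dumped classfile with javap (path is hypothetical):
javap -p -c /tmp/dump/p2/C\$D\$\$Lambda\$1.class
```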
Github user retronym commented on the issue:
https://github.com/apache/spark/pull/19675
I'm happy to help, but I would appreciate if someone could pose the
question you have about the lambda encoding / `SerializedLambda` in a
standalone fashion.
In the meantime, her
Github user retronym commented on the pull request:
https://github.com/apache/spark/pull/11410#issuecomment-198330775
So long as you've got some defence from a test case, your current solution
seems okay.
Github user retronym commented on the pull request:
https://github.com/apache/spark/pull/11410#issuecomment-196676313
Since offering review, I've found a more direct way to get to the object.
```
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8
```
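[Editor's note: the REPL session above is truncated, so the exact trick shown there is lost. As a hedged illustration of the general technique relevant to `OuterScopes` — recovering a Scala `object`'s singleton instance via plain Java reflection — the following sketch uses hypothetical names:]

```scala
object OuterDemo {
  object Outer { class Inner }

  def main(args: Array[String]): Unit = {
    // A Scala `object` compiles to a JVM class (name suffixed with `$`)
    // whose static MODULE$ field holds the singleton instance, so Java
    // reflection can recover it without instantiating anything.
    val clazz = Class.forName("OuterDemo$Outer$")
    val singleton = clazz.getField("MODULE$").get(null)
    println(singleton eq OuterDemo.Outer)
  }
}
```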
Github user retronym commented on a diff in the pull request:
https://github.com/apache/spark/pull/11410#discussion_r55785695
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/OuterScopes.scala
---
@@ -39,4 +41,47 @@ object OuterScopes {
def
Github user retronym commented on a diff in the pull request:
https://github.com/apache/spark/pull/11410#discussion_r54647099
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/OuterScopes.scala
---
@@ -39,4 +41,35 @@ object OuterScopes {
def
Github user retronym commented on a diff in the pull request:
https://github.com/apache/spark/pull/11410#discussion_r54557328
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/OuterScopes.scala
---
@@ -39,4 +41,35 @@ object OuterScopes {
def
Github user retronym commented on a diff in the pull request:
https://github.com/apache/spark/pull/2615#discussion_r19324410
--- Diff: bin/compute-classpath.sh ---
@@ -20,7 +20,7 @@
# This script computes Spark's classpath and prints it to stdout; it's
used by bot
Github user retronym commented on the pull request:
https://github.com/apache/spark/pull/2615#issuecomment-60180401
It is a pity that we didn't realise that the PR as finally accepted was
insufficient for Spark.
Would you be interested in proposing another PR against scala/
Github user retronym commented on the pull request:
https://github.com/apache/spark/pull/2615#issuecomment-59850925
Was any design other than a wholesale copy/paste of the REPL
considered? The commit message doesn't reveal much.
We'd be happy to help out ove