job configured, build running:
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-ubuntu-scala-2.12/3/
on the bright(er) side, since i tested the crap out of this build on the
new ubuntu nodes, i've set this new job to run there. :)
shane
i'll get something set up quickly by hand today, and make a TODO to get the
job config checked in to the jenkins job builder configs later this week.
shane
On Sun, Aug 5, 2018 at 7:10 AM, Sean Owen wrote:
> Shane et al - could we get a test job in Jenkins to test the Scala 2.12
> build?
The root cause for a case where the closure cleaner is involved is described
here: https://github.com/apache/spark/pull/22004/files#r207753682, but I am
also waiting for feedback from Lukas Rytz on why this even worked in
2.11.
If it is something that needs a fix and can be fixed, we will fix it and add
A Spark user’s expectation would be that any closure which worked in 2.11
will continue to work in 2.12 (exhibiting the same behavior wrt functionality,
serializability, etc.).
If there are behavioral changes, we will need to understand what they are,
but the expectation would be that they are minimal (if any).
The closure cleaner's initial purpose, AFAIK, is to clean the dependencies
brought in with outer pointers (a side effect of the compiler). With LMFs
(lambdas compiled via LambdaMetafactory) in Scala 2.12 there are no outer
pointers, which is why in the new design document we kept the implementation
minimal, focusing on the return statements (it was
I agree, we should not work around the test case but rather understand
and fix the root cause.
Closure cleaner should have null'ed out the references and allowed it
to be serialized.
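The kind of cleaning described here, nulling an unused reference so serialization no longer drags in an unserializable object graph, can be sketched outside Spark. A hypothetical Java example (the Task and Outer names are illustrative, not Spark classes; Spark's actual cleaner works on cloned closures, not the original object):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.lang.reflect.Field;

public class NullOutDemo {
    // Stands in for an enclosing object that is not serializable.
    static class Outer { final Object resource = new Object(); }

    // A "closure" that references Outer but never actually uses it.
    static class Task implements Serializable {
        Outer outer = new Outer(); // unused dependency, like an outer pointer
        int factor = 2;
        int apply(int x) { return x * factor; }
    }

    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(o);
        }
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        Task task = new Task();
        try {
            serialize(task); // fails: Outer is reachable and not serializable
        } catch (NotSerializableException e) {
            System.out.println("before cleaning: " + e.getMessage());
        }
        // What a cleaner can do for an unused reference: null it out,
        // after which serialization succeeds and behavior is unchanged.
        Field f = Task.class.getDeclaredField("outer");
        f.setAccessible(true);
        f.set(task, null);
        System.out.println("after cleaning: apply(21)=" + task.apply(21));
    }
}
```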
Regards,
Mridul
On Sun, Aug 5, 2018 at 8:38 PM Wenchen Fan wrote:
>
> It seems to me that the closure cleaner
It seems to me that the closure cleaner fails to clean up something. The
failed test case defines a serializable class inside the test case, and the
class doesn't refer to anything in the outer class. Ideally it can be
serialized after cleaning up the closure.
This is somehow a very weird way to
Makes sense; not sure if closure cleaning is related to the last one, for
example, or to the others. The last one is a bit weird, unless I am missing
something about the LegacyAccumulatorWrapper logic.
Stavros
On Sun, Aug 5, 2018 at 10:23 PM, Sean Owen wrote:
> Yep that's what I did. There are more
Yep that's what I did. There are more failures with different resolutions.
I'll open a JIRA and PR and ping you, to make sure that the changes are all
reasonable, and not an artifact of missing something about closure cleaning
in 2.12.
In the meantime having a 2.12 build up and running for master
Hi Sean,
I ran a quick build; the failing tests seem to be:
- SPARK-17644: After one stage is aborted for too many failed
attempts, subsequent stages still behave correctly on fetch failures
*** FAILED ***
A job with one fetch failure should eventually succeed
(DAGSchedulerSuite.scala:2422)
Shane et al - could we get a test job in Jenkins to test the Scala 2.12
build? I don't think I have the access or expertise for it, though I could
probably copy and paste a job. I think we just need to clone the, say,
master Maven Hadoop 2.7 job, and add two steps: run
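For what it's worth, the two extra steps would presumably look something like the following (a sketch based on Spark's cross-build docs from around that time; the exact profiles the job needs are an assumption):

```shell
# Step 1: switch the POMs over to Scala 2.12
# (change-scala-version.sh ships in the Spark repo under dev/)
./dev/change-scala-version.sh 2.12

# Step 2: build and test with the Scala 2.12 profile enabled,
# keeping the same Hadoop profile as the job being cloned
./build/mvn -Phadoop-2.7 -Pscala-2.12 clean test
```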