Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/17364
thanks.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17364
merged to master
sorry I forgot to take look at this for a while @steveloughran, thanks for
the reminder
Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/17364
@squito Is this ready to go in? Like I warned, I'm not going to add tests
for this, not on its own
Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/17364
I don't have time/plans to do the test here, as it's a fairly complex
piece of test setup for what a review should show isn't doing anything other
than guarantee the outcome of
Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/17364
looking some more, yes, as `tryWithSafeFinallyAndFailureCallbacks` wraps
task commit, it guarantees that the original cause doesn't get lost. The
abortJob code isn't so well guarded, and
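The guarantee being discussed can be illustrated with a minimal sketch of the pattern. This is not Spark's actual `Utils.tryWithSafeFinallyAndFailureCallbacks` implementation, just a simplified stand-in showing how a failure in the abort/cleanup callbacks is attached via `addSuppressed` instead of masking the original task failure:

```scala
// Simplified sketch of the try/catch/finally-with-callbacks pattern.
// The real Spark helper lives in org.apache.spark.util.Utils and is more
// involved; only the exception-preservation idea is shown here.
def tryWithSafeFinallyAndFailureCallbacks[T](block: => T)(
    catchBlock: => Unit)(finallyBlock: => Unit): T = {
  var originalThrowable: Throwable = null
  try {
    block
  } catch {
    case t: Throwable =>
      originalThrowable = t
      try {
        catchBlock // e.g. committer.abortTask(taskAttemptContext)
      } catch {
        case cause: Throwable =>
          // attach the abort failure; never let it replace the real cause
          originalThrowable.addSuppressed(cause)
      }
      throw originalThrowable
  } finally {
    try {
      finallyBlock
    } catch {
      case cause: Throwable if originalThrowable != null =>
        originalThrowable.addSuppressed(cause)
        throw originalThrowable
      // with no original failure, a failing finallyBlock propagates as-is
    }
  }
}
```

With this shape, the exception that reaches the scheduler is always the task's own failure, with any abort/cleanup failures visible as suppressed exceptions.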
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17364
> Note that as the exception handler tries to close resources before
calling committer.abortTask(taskAttemptContext), without this patch a failing
releaseResources() means that abortTask() isn't
Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/17364
Created [SPARK-20045](https://issues.apache.org/jira/browse/SPARK-20045). I
think there's room to improve resilience in the abort code, primarily to ensure
that the underlying failure cause
Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/17364
I haven't reviewed that bit of code: make it a separate JIRA and assign to
me. This one I came across in the Hadoop 2.8.0 RC3 testing; the underlying fix
there is going in, but the spark code
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17364
@steveloughran, maybe this is not strictly related to the problem
specified in the JIRA, but do you know if we should do the same thing to
`SparkHadoopMapReduceWriter`? I remember I had to fix the
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17364
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17364
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/74903/
Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17364
**[Test build #74903 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/74903/testReport)**
for PR 17364 at commit
Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/17364
Note that as [the exception
handler](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala#L244)
tries to close
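The hazard described in that note can be sketched as follows. Names like `releaseResources` and `abortTask` are taken from the thread but the function below is illustrative, not the actual `FileFormatWriter` code: the point is that each cleanup step is guarded separately, so a failing `releaseResources()` can no longer prevent `abortTask()` from running, and both failures stay attached to the original cause:

```scala
// Illustrative failure handler: guard each cleanup step independently so
// a failing close cannot skip the task abort, and neither failure masks
// the original exception.
def failTask(original: Throwable,
             releaseResources: () => Unit,
             abortTask: () => Unit): Nothing = {
  try releaseResources() catch {
    case e: Throwable => original.addSuppressed(e)
  }
  try abortTask() catch {
    case e: Throwable => original.addSuppressed(e)
  }
  throw original
}
```

Without the first guard, an exception from `releaseResources()` would propagate immediately and the committer would never see the abort, which is exactly the situation this patch addresses.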