Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16620
@markhamstra @squito
Thanks a lot for your helpful comments.
I added a unit test for this fix and updated the patch. It now passes all
unit tests locally.
In this fix: add a
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16620
@squito
`SchedulerIntegrationSuite` is very helpful; I like it very much. I can
now reproduce this issue in `SchedulerIntegrationSuite`.
Fixing this issue is more complicated than I
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
@vanzin
Sorry for the stupid mistake I made. I've changed it. Please take another look.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitH
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
@vanzin
Thanks for your comments. I have changed the unit test. Could you take
another look?
GitHub user jinxing64 opened a pull request:
https://github.com/apache/spark/pull/16620
[SPARK-19263] DAGScheduler should handle stage's pendingPartitions properly
in handleTaskCompletion.
## What changes were proposed in this pull request?
In current `DAGSche
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
ping
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
@vanzin @ash211
Thanks a lot for your comments; I've changed accordingly. Please give
another look at this~~
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16503#discussion_r96127359
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/OutputCommitCoordinatorSuite.scala
---
@@ -221,6 +229,22 @@ private case class
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16503#discussion_r96120047
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/OutputCommitCoordinatorSuite.scala
---
@@ -221,6 +232,17 @@ private case class
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
@vanzin @zsxwing
Thanks a lot for your comment. I will file another JIRA to add a blocking
version of `ask`.
What else can I do for this PR? :)
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
@ash211
Thanks a lot for your comment. I've already fixed the failing Scala style
tests. Running `./dev/scalastyle` passed. Could you give another look?
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
@ash211
Thank you so much for your comment. I've changed accordingly.
Could you please give another look?
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
ping @zsxwing @vanzin
Could you give another look at this, please?
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
> If we can remove uses of askWithRetry as we find these issues, we can, at
some point, finally get rid of the API altogether.

What do you think about providing a *"blocking"* version of `ask`?
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
@vanzin
Thanks a lot for your comment. It's very helpful.
I'll change it to `ask`.
I think it makes sense to keep the receiver idempotent when handling
`AskPermissionToCommitOutput`.
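The idempotency discussed above can be sketched as follows. This is a hypothetical illustration, not the actual `OutputCommitCoordinator` API: the first attempt to ask for a given (stage, partition) wins, and any repeated ask from the same attempt gets the same answer back.

```scala
import scala.collection.mutable

// Hypothetical sketch of an idempotent commit authorizer: it records the
// first attempt allowed to commit each (stage, partition) and returns the
// same answer if that attempt asks again.
class CommitAuthorizer {
  private val winners = mutable.Map.empty[(Int, Int), Int] // (stage, partition) -> attempt

  def canCommit(stage: Int, partition: Int, attempt: Int): Boolean = synchronized {
    winners.get((stage, partition)) match {
      case Some(winner) => winner == attempt // repeated ask: same value again
      case None =>
        winners((stage, partition)) = attempt // first asker wins
        true
    }
  }
}

val auth = new CommitAuthorizer
auth.canCommit(0, 0, 1) // true: first attempt wins
auth.canCommit(0, 0, 1) // true again: same attempt, same answer
auth.canCommit(0, 0, 2) // false: a different attempt is denied
```

Because the answer depends only on recorded state, a retried or duplicated RPC for the same attempt cannot flip the decision.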
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
@zsxwing @vanzin
Maybe using `ask` in method `canCommit` is not suitable (I think), because
`ask` returns a `Future`, but it should be a blocking process to get the result of
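Blocking on the `Future` that `ask` returns can be sketched like this. The names `askBlocking` and the stubbed `ask` are hypothetical, assuming a `Future`-based RPC in the style of `RpcEndpointRef.ask`:

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._
import ExecutionContext.Implicits.global

// Hypothetical stand-in for a Future-returning RPC such as RpcEndpointRef.ask.
def ask(message: Any): Future[Boolean] = Future { true }

// A "blocking" wrapper: send the message, then wait for the reply with a timeout.
def askBlocking(message: Any, timeout: FiniteDuration = 10.seconds): Boolean =
  Await.result(ask(message), timeout)

askBlocking("AskPermissionToCommit") // blocks the caller until the reply arrives
```

A bounded timeout matters here: unlike a retry loop, a single blocking wait either yields the one authoritative answer or fails with a `TimeoutException`, so the receiver never sees duplicated asks.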
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
@zsxwing @kayousterhout @andrewor14 Could you please help take a look at
this?
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
@mccheah @JoshRosen @ash211 Could you please take a look at this?
GitHub user jinxing64 opened a pull request:
https://github.com/apache/spark/pull/16503
[SPARK-18113] Method canCommit should return the same value when called by
the same attempt multi times.
## What changes were proposed in this pull request?
Method