Github user liyichao commented on the issue:
https://github.com/apache/spark/pull/18070
Oh, I did not notice that. Since @nlyu is following up, I will close this PR now.
---
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/18070
There is actually another pull request open that does the same thing:
https://github.com/apache/spark/pull/18819
---
Github user liyichao commented on the issue:
https://github.com/apache/spark/pull/18070
I will update the PR in a day.
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/18070
ping @liyichao Will you address the latest comments from @tgravescs?
---
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/18070
Sorry for my delay in getting back to this.
So if we do that, you would have to have TaskKilledReason extend
TaskFailedReason, because things rely on the countTowardsTaskFailures field.
Then …
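For context, a minimal sketch of the hierarchy under discussion. The shape of TaskFailedReason mirrors Spark's TaskEndReason.scala, but the shared TaskKilledReason trait and these simplified case classes are hypothetical, illustrating the proposal rather than Spark's actual source:

```scala
sealed trait TaskFailedReason {
  def toErrorString: String
  // Other code relies on this field, which is why the shared trait
  // has to sit under TaskFailedReason.
  def countTowardsTaskFailures: Boolean = true
}

// Hypothetical shared trait for "killed, not genuinely failed" reasons.
sealed trait TaskKilledReason extends TaskFailedReason {
  override def countTowardsTaskFailures: Boolean = false
}

case class TaskKilled(reason: String) extends TaskKilledReason {
  override def toErrorString: String = s"TaskKilled ($reason)"
}

case class TaskCommitDenied(jobID: Int, partitionID: Int, attemptNumber: Int)
    extends TaskKilledReason {
  override def toErrorString: String =
    s"TaskCommitDenied (Driver denied task commit) for job: $jobID, " +
      s"partition: $partitionID, attemptNumber: $attemptNumber"
}
```
---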
Github user liyichao commented on the issue:
https://github.com/apache/spark/pull/18070
How about letting TaskCommitDenied and TaskKilled extend the same trait (for
example, TaskKilledReason)? This way, when accounting metrics, TaskCommitDenied
and TaskKilled both contribute to task …
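A hedged sketch of how that accounting might look, reusing the hypothetical TaskKilledReason trait from the sketch above (the counter map stands in for Spark's real per-stage metrics):

```scala
// With a shared trait, metrics code can pattern-match once instead of
// special-casing TaskKilled and TaskCommitDenied separately.
def recordTaskEnd(reason: TaskFailedReason,
                  counters: scala.collection.mutable.Map[String, Int]): Unit =
  reason match {
    case _: TaskKilledReason =>
      // Killed and commit-denied tasks count as "killed", not "failed".
      counters("killed") = counters.getOrElse("killed", 0) + 1
    case _ =>
      counters("failed") = counters.getOrElse("failed", 0) + 1
  }
```
---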
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/18070
Thanks for the updates. I was testing this out by running a large job with
speculative tasks, and I am still seeing the stage summary show failed tasks.
It looks like it's due to this code:
http…
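For reference, a hedged sketch of the kind of job that reproduces this. The speculation config keys are standard Spark settings; the skewed job itself is illustrative:

```scala
import org.apache.spark.sql.SparkSession

object SpeculationRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("speculation-repro")
      .config("spark.speculation", "true")          // enable speculative execution
      .config("spark.speculation.quantile", "0.5")  // speculate earlier than the 0.75 default
      .getOrCreate()

    // Skew a few partitions so speculative copies launch; the losing
    // attempts get killed, and the question is whether those killed
    // attempts show up as "failed" in the stage summary.
    spark.sparkContext.parallelize(1 to 1000, 100).map { i =>
      if (i % 100 == 0) Thread.sleep(5000)
      i
    }.count()

    spark.stop()
  }
}
```
---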
Github user liyichao commented on the issue:
https://github.com/apache/spark/pull/18070
ping @tgravescs
---
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/18070
Sorry, the case I was talking about is with a fetch failure. The true stage
abort doesn't happen until it retries 4 times. In the meantime you can have
tasks from the same stage (different attempts…
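A hedged, self-contained illustration of the retry behavior described here, not DAGScheduler's actual code: a fetch failure resubmits the stage, and the true abort only happens once the attempt limit is hit (4 by default, configurable in later Spark versions via spark.stage.maxConsecutiveAttempts):

```scala
object FetchFailureRetrySketch {
  val MaxConsecutiveStageAttempts = 4 // mirrors Spark's default

  final case class Stage(id: Int, failedAttempts: Int)

  // Returns the resubmitted stage, or throws once the limit is reached.
  def handleFetchFailure(stage: Stage): Stage =
    if (stage.failedAttempts + 1 >= MaxConsecutiveStageAttempts) {
      sys.error(s"Stage ${stage.id} aborted after ${stage.failedAttempts + 1} attempts")
    } else {
      // While the resubmitted attempt runs, tasks from the previous attempt
      // of the same stage can still be running and may later be killed or
      // commit-denied, which is where the accounting issue shows up.
      stage.copy(failedAttempts = stage.failedAttempts + 1)
    }
}
```
---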
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18070
cc @ericl
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18070
Can one of the admins verify this patch?
---