jkleckner commented on pull request #28423:
URL: https://github.com/apache/spark/pull/28423#issuecomment-672244924
What I see looks a bit different: for me, the job appears to get stuck at the very end of writing data to Bigtable, in the very last task. Our partner is working […]
jkleckner commented on pull request #28423:
URL: https://github.com/apache/spark/pull/28423#issuecomment-671434643
@liyinan926 Do you think there is an adequate existing fallback mechanism, or do you still believe there is a need to create a similar patch for […]
jkleckner commented on pull request #28423:
URL: https://github.com/apache/spark/pull/28423#issuecomment-666524019
@stijndehaes In private discussions about the hang we are seeing, there appears to be another watcher [1], used by the driver to watch executors, that may also lose notifications.
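The failure mode discussed in this thread can be sketched abstractly: a driver that tracks executor pods purely from watch events can be wedged forever by a single dropped DELETED notification, and a periodic polling snapshot from the API server is the fallback that reconciles the two views. This is a minimal illustration under stated assumptions, not Spark's actual code; `WatchFallbackSketch`, `applyEvents`, and `lostDeletions` are hypothetical names.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch (not Spark's real classes): the pod view the driver
// builds from watch events can silently miss a notification; a polling
// snapshot acts as the fallback that detects and corrects the drift.
public class WatchFallbackSketch {

    // Fold ADDED/DELETED watch events into the driver's view of live pods.
    // Each event is a [type, podName] pair; unknown types are ignored.
    static Set<String> applyEvents(Set<String> initial, List<String[]> events) {
        Set<String> state = new HashSet<>(initial);
        for (String[] e : events) {
            if ("ADDED".equals(e[0])) state.add(e[1]);
            else if ("DELETED".equals(e[0])) state.remove(e[1]);
        }
        return state;
    }

    // Pods the event-derived view still considers alive but a fresh
    // API-server snapshot no longer reports: lost DELETED notifications.
    // Resolving these is what keeps the driver from hanging on the last task.
    static Set<String> lostDeletions(Set<String> eventView, Set<String> snapshot) {
        Set<String> lost = new HashSet<>(eventView);
        lost.removeAll(snapshot);
        return lost;
    }
}
```

The design point is that the polling snapshot is authoritative: whenever `lostDeletions` is non-empty, the event-derived view is stale and should be replaced by the snapshot rather than trusted further.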
jkleckner commented on pull request #28423:
URL: https://github.com/apache/spark/pull/28423#issuecomment-662720174
I took the commits from master and made a partial attempt to rebase this onto branch-2.4 [1].
However, the k8s API has evolved quite a bit since 2.4, so the watchOrStop […]
jkleckner commented on pull request #28423:
URL: https://github.com/apache/spark/pull/28423#issuecomment-661271223
Will there be a backport of this to branch-2.4?
This is an automated message from the Apache Git Service.
jkleckner commented on pull request #28423:
URL: https://github.com/apache/spark/pull/28423#issuecomment-646336615
+1 for this. Hit this in GKE today.