Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/22120
It is not that the change is trivial, it's that I don't see the point of
it. There is no concept of "attempts" in Spark client mode. So why fill in this
information at all?
It's an "option"
Github user ajithme commented on the issue:
https://github.com/apache/spark/pull/22120
@vanzin I agree it's a trivial change. I just wanted the output to be
consistent with YARN cluster mode. This is not just for event logs but also for
a custom SparkListener , it may be confusing that appId
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/22120
Is this really necessary? It will always be "1", since client-mode apps are
not re-tried (the YARN AM might be, but the driver is not). That makes it not
really useful.
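For context, the field being discussed reaches user code through the `SparkListener` API: `SparkListenerApplicationStart.appAttemptId` is an `Option[String]`, and in client mode it is currently `None` (the point of contention is whether populating it with a constant "1" adds anything). A minimal sketch of a listener observing it — the class name and log format here are illustrative, not part of the PR:

```scala
import org.apache.spark.scheduler.{SparkListener, SparkListenerApplicationStart}

// Hypothetical listener illustrating where appAttemptId surfaces.
// appAttemptId is an Option[String]: set by the cluster manager in YARN
// cluster mode, None in client mode (which this PR proposes to change).
class AttemptIdListener extends SparkListener {
  override def onApplicationStart(event: SparkListenerApplicationStart): Unit = {
    val appId = event.appId.getOrElse("<unknown>")
    val attemptId = event.appAttemptId.getOrElse("<none>")
    println(s"application started: appId=$appId attemptId=$attemptId")
  }
}
```

Such a listener can be registered with the existing `spark.extraListeners` configuration property, giving it the fully qualified class name.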
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/22120
Can one of the admins verify this patch?
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional