I recently upgraded a Structured Streaming application from Spark 2.2.1 to
Spark 2.3.0. The application runs in yarn-cluster mode and makes use of
the spark.yarn.{driver|executor}.memoryOverhead properties. After the
upgrade, the job started crashing unexpectedly, and after a bunch of
digging it seems these properties were renamed to
"spark.driver.memoryOverhead" and "spark.executor.memoryOverhead" - they
appear in the 2.2.1 configuration documentation, but not in the 2.3.0
docs.
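
In case it helps anyone reproduce, here's roughly the change I made while
testing (the overhead values below are just examples in MiB; only the
property names differ between the two versions):

    # Spark 2.2.1 submission (yarn-cluster mode), old property names
    spark-submit \
      --master yarn --deploy-mode cluster \
      --conf spark.yarn.driver.memoryOverhead=1024 \
      --conf spark.yarn.executor.memoryOverhead=2048 \
      ...

    # Spark 2.3.0 submission, new property names
    spark-submit \
      --master yarn --deploy-mode cluster \
      --conf spark.driver.memoryOverhead=1024 \
      --conf spark.executor.memoryOverhead=2048 \
      ...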

However, I can't find anything in the release notes between versions that
references this change - should the old spark.yarn.* settings still work,
or were they completely removed in favor of the new settings?

Regards,
Will
