I am setting up *Spark 2.2.0 in standalone mode* (
https://spark.apache.org/docs/latest/spark-standalone.html) and submitting
Spark jobs programmatically using
SparkLauncher sparkAppLauncher = new SparkLauncher(userNameMap)
        .setMaster(sparkMaster)
        .setAppName(appName);
SparkAppHandle sparkAppHandle = sparkAppLauncher.startApplication();
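For reference, a minimal self-contained sketch of that launcher flow might look like the following. This assumes SPARK_HOME is set and spark-launcher is on the classpath; the master URL, app name, jar path, and main class are placeholders, not values from the original snippet:

```java
import org.apache.spark.launcher.SparkAppHandle;
import org.apache.spark.launcher.SparkLauncher;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CountDownLatch;

public class LauncherSketch {
    public static void main(String[] args) throws Exception {
        // Environment variables for the child spark-submit process;
        // userNameMap in the original snippet plays this role.
        Map<String, String> env = new HashMap<>();

        SparkLauncher launcher = new SparkLauncher(env)
                .setMaster("spark://master-host:7077") // placeholder master URL
                .setAppName("example-app")             // placeholder app name
                .setAppResource("/path/to/app.jar")    // placeholder jar
                .setMainClass("com.example.Main");     // placeholder main class

        CountDownLatch done = new CountDownLatch(1);

        // startApplication() launches spark-submit asynchronously and returns
        // a handle; the listener is notified as the app changes state.
        SparkAppHandle handle = launcher.startApplication(new SparkAppHandle.Listener() {
            @Override
            public void stateChanged(SparkAppHandle h) {
                if (h.getState().isFinal()) {
                    done.countDown();
                }
            }

            @Override
            public void infoChanged(SparkAppHandle h) {
                // app ID and other info updates arrive here
            }
        });

        done.await();
        System.out.println("Final state: " + handle.getState());
    }
}
```

The listener-based startApplication() is generally preferable to launch(), since it reports the application's state without having to parse the child process's output.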
Hi Julien,
Thanks for the suggestion. If we don't do a broadcast, that would
presumably hurt the performance of the job, as the model that is failing
to be broadcast is something we need shared across the cluster.
But it may be worth it if the trade-off is not having things run
Hi,
We are intermittently facing the error below in Spark 2.4 when saving a
managed table from Spark.
Error -
pyspark.sql.utils.AnalysisException: u"Can not create the managed
table('`hive_issue`.`table`'). The associated