The default is 10 MB. How far you can push it depends on the memory available 
and what the network transfer costs are going to be. For Spark SQL you can 
raise the threshold via spark.sql.autoBroadcastJoinThreshold. But you 
definitely shouldn't be broadcasting gigabytes.
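
For reference, a minimal Scala sketch of both knobs (the app name and the 
100 MB figure are illustrative, not recommendations):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.broadcast

    val spark = SparkSession.builder()
      .appName("broadcast-threshold-demo") // illustrative name
      // Default is 10 MB (10485760 bytes); -1 disables auto-broadcast joins.
      .config("spark.sql.autoBroadcastJoinThreshold", 100L * 1024 * 1024)
      .getOrCreate()

    // Independent of the threshold, a broadcast join can be forced
    // explicitly with the broadcast() hint:
    //   largeDF.join(broadcast(smallDF), "id")

Note the threshold only governs automatic broadcast joins in Spark SQL; an 
explicit sc.broadcast() of a huge variable still has to fit in each executor's 
memory.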
________________________________
From: V0lleyBallJunki3 <venkatda...@gmail.com>
Sent: 10 April 2019 10:06
To: user@spark.apache.org
Subject: Unable to broadcast a very large variable

Hello,
   I have a 110 node cluster with each executor having 50 GB memory and I
want to broadcast a variable of 70GB with each machine have 244 GB of
memory. I am having difficulty doing that. I was wondering at what size is
it unwise to broadcast a variable. Is there a general rule of thumb?



