So, I think this is a bug, but I wanted to get some feedback before I reported 
it as such.  On Spark 1.1.0 on YARN, if you specify a --driver-memory value 
higher than the memory available on the client machine, Spark errors out 
because it fails to allocate enough memory.  This happens even in yarn-cluster 
mode.  Shouldn't it only allocate that memory on the YARN node that is going to 
run the driver process, not on the local client machine?
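
For example, an invocation along these lines (the class name, jar, and the 20g 
figure are just placeholders) hits the error whenever the client box has less 
than the requested 20g:

  spark-submit --master yarn-cluster --driver-memory 20g \
    --class com.example.MyApp my-app.jar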

Greg
