Hi Greg,

It does seem like a bug. What is the exact exception message that you
see?

Andrew

2014-10-08 12:12 GMT-07:00 Greg Hill <greg.h...@rackspace.com>:

>  So, I think this is a bug, but I wanted to get some feedback before I
> reported it as such.  On Spark 1.1.0 on YARN, if you set --driver-memory
> higher than the memory available on the client machine, Spark errors out
> because it fails to allocate enough memory.  This happens even in
> yarn-cluster mode.  Shouldn't that memory only be allocated on the YARN
> node that is going to run the driver process, not on the local client
> machine?
>
>  Greg
>
>
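
For reference, a minimal submit command matching the scenario described above might look like the following; the jar, the class name, and the 16g figure are hypothetical placeholders, with the assumption that 16g exceeds the physical memory of the client machine running spark-submit:

    # Hypothetical reproduction: driver memory larger than the client machine's RAM
    spark-submit \
      --master yarn-cluster \
      --driver-memory 16g \
      --class com.example.MyApp \
      my-app.jar

In yarn-cluster mode the driver runs inside the YARN ApplicationMaster container, so one would expect only that container, not the local spark-submit JVM, to need the 16g.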
