You should use `SparkConf.set` rather than `SparkConf.setExecutorEnv`;
`setExecutorEnv` only sets environment variables on the executors, not Spark
configuration properties. Driver configurations such as `spark.driver.memory`
must be set before the driver JVM starts, so pass them with the `--conf`
argument when running `spark-submit`.
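
As a minimal sketch (the app name, class, and jar below are placeholders, and
the memory/core values just mirror your snippet):

  import org.apache.spark.SparkConf

  // Executor settings go through set(), not setExecutorEnv()
  val sparkConf = new SparkConf()
    .setAppName("MyApp") // placeholder
    .set("spark.dynamicAllocation.enabled", "true")
    .set("spark.executor.cores", "8")
    .set("spark.executor.memory", "6g")

  // spark.driver.memory must be set before the driver JVM starts,
  // so pass it on the command line instead, e.g.:
  //   spark-submit --conf spark.driver.memory=6g \
  //     --class com.example.MyApp myapp.jar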

Best Regards,
Shixiong Zhu

2015-11-04 15:55 GMT-08:00 William Li <a-...@expedia.com>:

> Hi All – I have a four-worker-node cluster, each node with 8GB memory. When I
> submit a job, the driver node takes 1GB of memory, and each worker node only
> allocates one executor, which also takes just 1GB of memory. The job's
> settings are:
>
> sparkConf
>   .setExecutorEnv("spark.driver.memory", "6g")
>   .setExecutorEnv("spark.dynamicAllocation.enabled", "true")
>   .setExecutorEnv("spark.executor.cores","8")
>   .setExecutorEnv("spark.executor.memory", "6g*”*)
>
>
> Does anyone know how to make the workers or executors use more memory?
>
>
> Thanks,
>
>
> William.
