I think the straight answer is no, but you can hardcode these parameters if you want. Look at SparkContext.scala
<https://github.com/apache/spark/blob/master/core%2Fsrc%2Fmain%2Fscala%2Forg%2Fapache%2Fspark%2FSparkContext.scala#L364>
where all these properties are initialized; if you hardcode them there and rebuild your Spark, then it doesn't matter what the user sets.

Thanks
Best Regards

On Sat, Jun 13, 2015 at 10:56 AM, YaoPau <jonrgr...@gmail.com> wrote:

> For example, Hive lets you set a whole bunch of parameters (# of reducers,
> #
> of mappers, size of reducers, cache size, max memory to use for a join),
> while Impala gives users a much smaller subset of parameters to work with,
> which makes it nice to give to a BI team.
>
> Is there a way to restrict which parameters a user can set for a Spark job?
> Maybe to cap the # of executors, or cap the memory for each executor, or to
> enforce a default setting no matter what parameters are used.
>
