[ https://issues.apache.org/jira/browse/TINKERPOP3-911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14970011#comment-14970011 ]
Russell Alexander Spitzer edited comment on TINKERPOP3-911 at 10/22/15 10:26 PM:
---------------------------------------------------------------------------------
Properties that you may want to change on a per-thread basis:
{code}
// Thread-local job property keys defined in org.apache.spark.SparkContext:
private[spark] val SPARK_JOB_DESCRIPTION = "spark.job.description"
private[spark] val SPARK_JOB_GROUP_ID = "spark.jobGroup.id"
private[spark] val SPARK_JOB_INTERRUPT_ON_CANCEL = "spark.job.interruptOnCancel"
private[spark] val RDD_SCOPE_KEY = "spark.rdd.scope"
private[spark] val RDD_SCOPE_NO_OVERRIDE_KEY = "spark.rdd.scope.noOverride"
{code}
And the fair-scheduler pool property:
{code}
"spark.scheduler.pool"
{code}
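For reference, a minimal sketch (plain Spark, not TinkerPop code; the object, pool, and job-group names are made up) of setting these on a single thread through SparkContext.setLocalProperty and setJobGroup:
{code}
import org.apache.spark.{SparkConf, SparkContext}

object PerThreadSparkProps {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("shared-context").setMaster("local[*]"))

    val worker = new Thread(new Runnable {
      override def run(): Unit = {
        // Local properties only affect jobs submitted from this thread.
        sc.setLocalProperty("spark.scheduler.pool", "analytics") // illustrative pool name
        sc.setJobGroup("computer-1", "per-thread sample job", interruptOnCancel = true)
        sc.parallelize(1 to 100).count()
      }
    })
    worker.start()
    worker.join()
    sc.stop()
  }
}
{code}
Local properties are read at job-submission time on the submitting thread, so two threads sharing one SparkContext can run under different pools and job groups.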
> Allow setting Thread Specific Spark JobGroup/Custom Properties based on hadoop conf
> -----------------------------------------------------------------------------------
>
> Key: TINKERPOP3-911
> URL: https://issues.apache.org/jira/browse/TINKERPOP3-911
> Project: TinkerPop 3
> Issue Type: Improvement
> Components: hadoop
> Reporter: Russell Alexander Spitzer
> Assignee: Marko A. Rodriguez
>
> When using a persistent Spark context it can be beneficial to pass in new
> configuration options for new users/GraphComputers. Currently the
> .getOrCreate call will always use the configuration from the initial
> construction. To work around this we should iterate over all of the
> properties passed into the GraphComputer and set them as local context
> properties on the thread we are operating on.
> See
> https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/SparkContext.scala#L630-L640
> This would let different GraphComputers set different Spark properties for
> use with things like the Spark Fair Scheduler.
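A hedged sketch of the workaround described above, assuming the GraphComputer's properties arrive as an Apache Commons Configuration (the helper object, method name, and the "spark." key filter are illustrative, not TinkerPop API):
{code}
import org.apache.commons.configuration.Configuration
import org.apache.spark.SparkContext
import scala.collection.JavaConverters._

object LocalPropertyHelper {
  // Copies "spark."-prefixed keys from the incoming configuration onto the
  // current thread, so a context reused via getOrCreate still honors
  // per-computation settings such as spark.scheduler.pool.
  def applyLocalProperties(sc: SparkContext, config: Configuration): Unit = {
    config.getKeys.asScala.foreach { key =>
      if (key.startsWith("spark.")) {
        sc.setLocalProperty(key, config.getString(key))
      }
    }
  }
}
{code}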