It does not look like you're supposed to fiddle with the SparkConf, or even
the SparkContext, in a 'job' (again, I don't know much about jobserver),
since you're given a SparkContext as a parameter in the build method.

I guess jobserver initialises the SparkConf and SparkContext itself when it
first starts, whereas you're actually creating a new one within your job.
The github example you mentioned doesn't do that; it just uses the context
it is given as a parameter:

def build(sc: SparkContext): RDD[(Reputation, User)] = {
  sc.textFile(inputPath).
    map(User.fromRow).
    collect {
      case Some(user) => user.reputation -> user
    }.
    sortByKey(ascending = false)
}
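
For comparison, a jobserver job normally gets its context through the
SparkJob trait rather than creating one. A minimal sketch, assuming the
spark.jobserver.SparkJob API (ExampleJob and input.path are just
illustrative names, not from your code):

import com.typesafe.config.Config
import org.apache.spark.SparkContext
import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

object ExampleJob extends SparkJob {
  // jobserver calls this before running the job; nothing to check here
  override def validate(sc: SparkContext, config: Config): SparkJobValidation =
    SparkJobValid

  // jobserver hands you the SparkContext: just use it, never build your own
  override def runJob(sc: SparkContext, config: Config): Any =
    sc.textFile(config.getString("input.path")).count()
}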

I am also not sure how you upload your job's jar to the server (the curl
command you posted does not seem to do so).
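
If I remember the jobserver README correctly, the jar is uploaded with a
separate binary POST, and jobs are then submitted by app name and class
path. Something along these lines (port 8090 is the default; the jar path,
app name and class path are placeholders):

# upload the jar under the app name "myapp"
curl --data-binary @target/my-job.jar localhost:8090/jars/myapp

# then run a job from that jar by class path
curl -d "" 'localhost:8090/jobs?appName=myapp&classPath=com.example.ExampleJob'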

Maybe you could first try to make it work on its own as a regular Spark
app, without using jobserver.
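
For instance, with a small main that wraps the same build method (just a
sketch, assuming build is in scope; this is the one place where you do
create the conf and context yourself):

import org.apache.spark.{SparkConf, SparkContext}

object StandaloneRunner {
  def main(args: Array[String]): Unit = {
    // outside jobserver, we do create the conf and context ourselves
    val conf = new SparkConf().setAppName("reputation-test")
    val sc = new SparkContext(conf)
    try {
      // run the same logic as the job and sanity-check a few results
      build(sc).take(10).foreach(println)
    } finally {
      sc.stop()
    }
  }
}

If that runs fine with spark-submit, the problem is probably on the
jobserver side rather than in your code.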



