Hi, I'm trying to work with Spark from the shell and create a Hadoop Job instance. I get the exception shown below because Job.toString throws unless the job has already been submitted, and the REPL calls toString when it echoes the value of an expression.
I tried using the :silent command but that didn't seem to have any impact.

    scala> import org.apache.hadoop.mapreduce.Job
    scala> val job = new Job()
    java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
        at org.apache.hadoop.mapreduce.Job.ensureState(Job.java:283)
        at org.apache.hadoop.mapreduce.Job.toString(Job.java:462)
        at scala.runtime.ScalaRunTime$.scala$runtime$ScalaRunTime$$inner$1(ScalaRunTime.scala:324)
        at scala.runtime.ScalaRunTime$.stringOf(ScalaRunTime.scala:329)
        at scala.runtime.ScalaRunTime$.replStringOf(ScalaRunTime.scala:337)
        at .<init>(<console>:10)
        at .<clinit>(<console>)
        ...

Any help would be greatly appreciated!

Thanks,
Alex
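To illustrate what I believe is happening — the failure is in the REPL's echo calling toString, not in constructing the Job itself — here is a minimal stand-in. DefineStateJob is a hypothetical class that only mimics Job's behavior; it is not the real Hadoop Job:

```scala
// Stand-in for org.apache.hadoop.mapreduce.Job (hypothetical class):
// construction succeeds, but toString throws while the job is not running.
class DefineStateJob {
  override def toString: String =
    throw new IllegalStateException("Job in state DEFINE instead of RUNNING")
}

// Constructing the object is fine; only converting it to a String fails.
def toStringThrows(): Boolean =
  try { (new DefineStateJob).toString; false }
  catch { case _: IllegalStateException => true }

// In the REPL, `val job = new DefineStateJob` would fail because the REPL
// echoes the bound value via toString. Assigning to a pre-declared var
// evaluates to Unit, so the REPL has nothing to echo:
//   var job: DefineStateJob = null
//   job = new DefineStateJob
```

If that diagnosis is right, assigning to a pre-declared var (the assignment expression evaluates to Unit) should sidestep the echo, though I haven't confirmed this against the real Job class.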