You're running spark-shell. It already creates a SparkContext for you and
makes it available in a variable called "sc".
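
For example, you can check what that context is already configured with (a
quick sketch; note that sc.getConf returns a copy, so modifying it won't
change the running context):

    scala> sc.getConf.toDebugString   // lists all explicitly-set properties, one per line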

If you want to change the config of spark-shell's context, you need to use
command-line options. (Or stop the existing context first, although I'm not
sure how well that will work.)
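
For example, something like this when launching the shell (the
spark.es.nodes key is just an assumption based on how elasticsearch-hadoop
exposes its "es.*" settings through Spark config; substitute whatever
settings you actually need):

    $ spark-shell --master "local[2]" \
        --conf spark.app.name=CountingSheep \
        --conf spark.es.nodes=localhost

And if you want to try the stop-and-recreate route inside the shell, a
minimal sketch (again, I can't vouch for how cleanly this works in 2.0):

    scala> sc.stop()
    scala> val conf = new org.apache.spark.SparkConf().setMaster("local[2]").setAppName("CountingSheep")
    scala> val sc = new org.apache.spark.SparkContext(conf)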

On Tue, Sep 13, 2016 at 10:49 AM, Kevin Burton <bur...@spinn3r.com> wrote:

> I'm rather confused here as to what to do about creating a new
> SparkContext.
>
> Spark 2.0 prevents it... (exception included below)
>
> yet a TON of examples I've seen basically tell you to create a new
> SparkContext as standard practice:
>
> http://spark.apache.org/docs/latest/configuration.html#dynamically-loading-spark-properties
>
> val conf = new SparkConf()
>              .setMaster("local[2]")
>              .setAppName("CountingSheep")
> val sc = new SparkContext(conf)
>
>
> I'm specifically running into a problem in that ES-Hadoop won't work with
> its settings, and I think it's related to this problem.
>
> Do we have to call sc.stop() first and THEN create a new SparkContext?
>
> That works, but I can't find any documentation anywhere telling us the
> right course of action.
>
>
>
> scala> val sc = new SparkContext();
> org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
> org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:823)
> org.apache.spark.repl.Main$.createSparkSession(Main.scala:95)
> <init>(<console>:15)
> <init>(<console>:31)
> <init>(<console>:33)
> .<init>(<console>:37)
> .<clinit>(<console>)
> .$print$lzycompute(<console>:7)
> .$print(<console>:6)
> $print(<console>)
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> java.lang.reflect.Method.invoke(Method.java:497)
> scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
> scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
> scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
> scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
> scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
> scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
>   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$2.apply(SparkContext.scala:2221)
>   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$2.apply(SparkContext.scala:2217)
>   at scala.Option.foreach(Option.scala:257)
>   at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:2217)
>   at org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:2290)
>   at org.apache.spark.SparkContext.<init>(SparkContext.scala:89)
>   at org.apache.spark.SparkContext.<init>(SparkContext.scala:121)
>   ... 48 elided


-- 
Marcelo
